i can't help you with being an artist-brand, as it's something i've never done and probably never will, but i think i can offer some insight and answers anyway.
first some broad context/thoughts that aren't in direct response to your questions:
i've never used instagram/facebook, but i used to be active on tumblr, where the browser extension tumblr saviour provided a way of blacklisting any tag. after many years of constantly adding/removing tags to blacklist, and sometimes toggling saviour on/off entirely, i came to understand the fundamental issues with tag blacklisting for personal comfort:
1) you're relying on everyone to use that tag when they discuss the subject in question. even if you account for variations in spelling, synonyms, and so on, you can never completely prevent yourself from seeing something because sometimes it won't be tagged. you can partly deal with this by blocking the people who consistently don't tag, but even the most well-meaning social media users will make a mistake and simply forget to tag something, or make a typo in the tag and thus render it useless for blacklisting purposes.
2) you're relying on everyone to use that tag in the way you want it to be used, which is honestly a bigger issue than 1. there will be people using a tag for something that is very much not what you want the tag to be used for. some people will also post stuff you want to see under a tag you have blacklisted, and you'll have no way of knowing this unless someone reblogs (or whatever instagram calls that) those posts without the tag, giving you the chance to see them. it's something to think about.
tumblr saviour's whitelist, which could be by poster or by tag, partially helps with this. if you generally didn't want to see posts about school, and thus blocked the obvious tag of #school, but still wanted to see posts about fish, you could whitelist #fish and see any post tagged #fish even if it were also tagged #school.
this whitelist could occasionally cause unexpected interactions, like someone posting about the time their school served fish for lunch, but in general it helps.
i don't know if instagram has a whitelist, but if it does i recommend trying it if your blacklist proves to cause problems.
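the whitelist-beats-blacklist logic i'm describing is simple enough to sketch. this is just a hypothetical illustration of the general idea, not tumblr saviour's actual code (the function name and structure are my own invention):

```python
def post_visible(post_tags, blacklist, whitelist):
    """decide whether to show a post: a whitelisted tag wins over the blacklist."""
    tags = {t.lower() for t in post_tags}
    if tags & whitelist:
        # any whitelisted tag makes the post visible, even if it's also blacklisted
        return True
    # otherwise, hide the post if it carries any blacklisted tag
    return not (tags & blacklist)

blacklist = {"#school"}
whitelist = {"#fish"}
print(post_visible(["#school", "#fish"], blacklist, whitelist))  # True: #fish rescues it
print(post_visible(["#school"], blacklist, whitelist))           # False: blacklisted
```

note how the school-fish-lunch post from my example above slips through: one whitelisted tag is enough to override every blacklisted one.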
no amount of careful blacklist curation can entirely prevent you from seeing something you don't want to, even if everyone is behaving themselves and you don't have to deal with trolls. be aware that the more tags you blacklist, the less you will see of the stuff you actually do want to see, even if you can't imagine wanting to see anything under those tags. there will be times when things intersect in ways you don't imagine.
i don't want to see hate either. in fact that's why i gave up on the many queer-focused subreddits. they're all full of ragebait.
but you need to remain consistently aware of the fact that you're not seeing people talk about racism and transphobia and sexism because you blacklisted it all, not because they're not talking about it.
creating a safe space for your own mental health is a good thing. but do it with the awareness that you're doing it. remember that it means your experience with the site is no longer the same as everyone else's.
Looking ahead, as I consider creating a public profile online for my art, I'm exploring services like CommentGuard, which offers paid solutions to automatically filter out hateful comments. Has anyone used CommentGuard or a similar service? What has your experience been with content moderation tools?
i've never used anything like that, and i don't expect to, but, i do have some serious concerns about commentguard specifically:
commentguard advertises itself as "ai-powered", but does not clarify what that means. i would distrust any company advertising itself as doing anything "ai-powered" at the moment, but the fact that they don't even explain what that means or which features, specifically, are "ai-powered" makes me incredibly suspicious. none of the possibilities here are good:
a) one or more llms are actually involved in the automatic processes they advertise. this in itself is a bad sign, as llms are still incredibly flawed and heavily biased even when they're working properly. as a queer artist, you are going to attract a lot of comments that the model will not be able to respond to appropriately.
b) it's just advertising bullshit and it isn't powered by any llms at all, and is good old-fashioned word filtering. issues with this older style of bot moderation aside, if they're lying about it that's bad anyway. dishonest buzzword advertising like this is never a good sign. if they genuinely have good software, they could advertise it honestly.
c) and i consider this to be, overwhelmingly, the most likely possibility: the system isn't automated (or is barely automated, see b) and they're using clickfarms. this isn't the place to go into greater depth about that but a huge quantity of supposedly "automated" systems, especially automated moderation as we're seeing here, aren't automated at all and are done entirely by real humans. real humans who are unpaid (and, quite often, abused) for their work and live in poverty.
in any of the three cases, the quality of moderation this service provides will be poor. the precise reason why may vary, and your comfort with these possibilities will likely vary too, but there's no good outcome here. i really do not trust commentguard. everything about this looks like a disaster waiting to happen.
it doesn't help that the majority of discussion i can find about commentguard online is from purpose-made advertising accounts masquerading as sincere recommendations. what genuine-looking discussion i do find seems to recommend other companies (usually napoleoncat) alongside commentguard, in the manner a real human tends to do. i'm going to be honest, i strongly suspect napoleoncat also employs underpaid clickfarms (a lot of these sorts of services do, and given that the country that most often visits the napoleoncat.com domain is india, and the service is over 15 years old, i won't believe they don't without evidence), but at least they aren't lying about being ai-powered. the bar is on the ground.
i can't answer your other two questions because i am not a brand, however, the way this one is worded is odd to me so i want to respond anyway:
For those who manage or participate in public forums like this one, which is notably respectful and inclusive, is there a lot of active moderating done by Melon or the admins? Do you use specific filter lists, or is the community self-selecting in a way that naturally discourages hate?
this is a forum, not social media. it doesn't work in a corporate way, because it isn't corporate. it's a place where people behave like people, and this is generally encouraged. sometimes they're rude, because sometimes people are rude.
my experience as a standard user has been that moderation is generally pretty light. some subjects see more moderation than others, mainly because melon likes to keep certain topics taboo here in the name of making the place more positive. that works on a tiny forum with an average of one new post an hour. it would not work on a forum that sees heavier use (and i am on such forums; managing taboo topics is a losing battle), and definitely not on a more corporate site like instagram.
there were some disagreements in a thread about copyright recently, and as far as i can tell the only moderation that occurred was a reminder to be more polite about it. (i could, of course, have missed some deleted posts).
mostly, though, it's this:
is the community self-selecting in a way that naturally discourages hate?
web revival and self-made web spaces like this tend to inherently self-police. the movement itself is very punk, and attracts a lot of old tech punks (like myself). punks have always been good at self-policing, that is in fact a primary tenet of punk philosophy. this means both discouraging and correcting hateful behaviour in others; and recognising and correcting it in oneself (which can sometimes mean choosing to leave).
this isn't a lesson most folk are formally taught, it's just something a lot of them naturally arrive at after enough exposure to this kind of behaviour in others, or in trying to practice the behaviour they'd like to see more of in others. though honestly i think it would be good if it were more formally taught in some way.