Excessive Report Button Abuse Across Chatrooms, Chatbot Ill-Equipped To Moderate

In this essay, I will lay out my reasons for supporting separate chatbots for separate domains, each operating under its own rules, so that the same robo-moderator isn't trying to prevent abuse across every domain at once.

There is an increasing tendency for chatrooms to implement a moderation chatbot, which is fine, but there still needs to be human oversight in determining who gets kicked or banned from a chatroom. In one community I frequent, the same chatbot is used to moderate all of the chatrooms, essentially undermining the goal of decentralizing the moderation overhead. One user has been abusing the report button, using it to enforce their ideology across chatrooms with very different social policies.

Any reasonable person would view this as authoritarian, an extreme abuse of power. But this abuse is nurtured in an environment with competing goals inside a single decentralized artificial intelligence community. As it stands, there is no way to prevent this person from abusing their power across all of the chatrooms, and the chatbot used to moderate is simply not equipped to handle moderation across different domains.

Because everything happens so quickly in that social space, by the time you’ve noticed anything going on with the chatbot, someone is already private-messaging you, asking you to strip naked unless you specifically block them. So you have an authoritarian chatbot that tries to moderate all of the different chatrooms, yet has no real power to stop genuine abuse by online creepers, who ask condescending questions like “do you understand complex math” to people whose competence should simply be taken for granted unless they state otherwise.

There are good aspects to the development community that set it apart from other groups: it’s possible to have a constructive conversation about how to install a particular company’s Docker images, for instance. I’ve also met some fellow open source developers in this space, and who knows what might come out of those relationships.

But generally, be mindful that chatbots are not yet ready to be deployed to moderate many different chatrooms at once. A simple workaround might be to employ a separate chatbot for each chatroom, as I’m not entirely against automated moderation. There should still be measures to prevent report button abuse, such as limiting the number of times a user can report a specific person within a given window; see the sketch below. Something needs to be done to prevent the abuse.
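
To make that concrete, here is a minimal sketch of what per-user report rate limiting could look like, assuming each chatroom runs its own moderator with its own rules. Everything here is my own illustration, not any existing bot’s API: the `ReportLimiter` class, the room names, and the specific limits are all hypothetical.

```python
import time
from collections import defaultdict, deque

class ReportLimiter:
    """Caps how often one user may report another specific user.

    Hypothetical illustration: limits apply per (reporter, target)
    pair within a sliding time window, so one person can't hammer
    the report button to drive the bot's automated actions.
    """

    def __init__(self, max_reports: int = 3, window_seconds: int = 86400):
        self.max_reports = max_reports
        self.window = window_seconds
        # Maps (reporter, target) -> timestamps of recent reports.
        self._history = defaultdict(deque)

    def allow_report(self, reporter: str, target: str) -> bool:
        now = time.time()
        recent = self._history[(reporter, target)]
        # Drop reports that have aged out of the window.
        while recent and now - recent[0] > self.window:
            recent.popleft()
        if len(recent) >= self.max_reports:
            return False  # Over the cap: ignore or flag this report.
        recent.append(now)
        return True

# One limiter per chatroom, so each room enforces its own policy
# instead of a single global moderator applying one rule everywhere.
room_limiters = {
    "ml-research": ReportLimiter(max_reports=3, window_seconds=86400),
    "off-topic":   ReportLimiter(max_reports=5, window_seconds=86400),
}

def handle_report(room: str, reporter: str, target: str) -> None:
    if room_limiters[room].allow_report(reporter, target):
        print(f"[{room}] report against {target} queued for human review")
    else:
        print(f"[{room}] report from {reporter} rate-limited")
```

Note that even in this sketch, an allowed report only gets queued for human review rather than triggering an automatic kick or ban, which is exactly the human oversight argued for above.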