Pre-moderating troublemakers


(Alena Rybik) #1

Hi guys, I wanted to ask your opinion on conditional pre-moderation. Our forum has recently seen a surge of toxic and negative activity. The initial cause is problems with the product (a game), but it’s spiralling in a very nasty direction: good members who want to participate in constructive discussions stop interacting because their posts get buried in flames. Negativity and bashing are contagious. I have 7 volunteer moderators, which is enough to get the job done in a normal situation, but with the number of violations we see right now we simply aren’t able to act quickly enough. Posts sit out there for hours before being removed, and they do harm.

As an interim measure I am thinking of creating a hidden group of “troublemakers”: members whose posts will need to be pre-moderated before going public. If a user’s posts have been moderated at least 2 times, they end up in that group, and their posts go public only after one of the moderators approves them.
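In rough code, the rule I have in mind would look something like the sketch below (purely illustrative; none of these names come from our actual platform’s API):

```python
# Sketch of the conditional pre-moderation rule described above.
# All names (User, submit_post, MODERATION_THRESHOLD) are illustrative,
# not part of any real forum platform's API.

MODERATION_THRESHOLD = 2  # moderated posts before a user enters the hidden group

class User:
    def __init__(self, name):
        self.name = name
        self.moderated_count = 0  # how many of their posts moderators have acted on

    @property
    def requires_premoderation(self):
        return self.moderated_count >= MODERATION_THRESHOLD

def submit_post(user, text, pending_queue, public_feed):
    """Route a new post to the public feed, or hold it for moderator approval."""
    if user.requires_premoderation:
        pending_queue.append((user, text))   # hidden until a moderator approves
    else:
        public_feed.append((user, text))     # goes public immediately

def approve_post(pending_queue, public_feed):
    """A moderator approves the oldest held post, making it public."""
    public_feed.append(pending_queue.pop(0))
```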

What do you think?


(Bo McGuffee) #2

We pre-moderate people all the time for a variety of reasons. However, we also have 24/7 moderation, so posts won’t sit long before they get attention.

My primary concern here would be maintaining a culture of cooperation. I would remind everyone that there is a vision for the community (which would need articulating), and that to that end there are boundaries on what is acceptable behavior.

Once expectations are established, moderate those who cross the boundaries, and remind them that they are being moderated because their behavior is disruptive to the goal of the community. If they prove that they are willing and able to uphold basic standards (which would also need to be defined), the need for moderation disappears and they can post freely.

If, however, the behavior continues, then their presence is detrimental to the communal goal, and they need to be removed. Whether it comes to this is entirely up to them, and entirely based on their behavior.
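If it helps to see that path laid out, here is a minimal sketch of the escalation logic (the state names are mine, purely for illustration, not a feature of any platform):

```python
# Minimal sketch of the escalation path described above; the states and
# transition rule are invented labels, not a platform feature.
from enum import Enum

class Standing(Enum):
    FREE = "posts freely"
    MODERATED = "posts held until approved"
    REMOVED = "removed from the community"

def next_standing(current, upholds_standards):
    """Reforming frees a moderated member; continued violations remove them."""
    if current is Standing.FREE:
        return Standing.FREE if upholds_standards else Standing.MODERATED
    if current is Standing.MODERATED:
        return Standing.FREE if upholds_standards else Standing.REMOVED
    return Standing.REMOVED  # removal is terminal
```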


(Sarah Hawk) #3

I like @irreverance’s approach here. While I agree with it broadly, I’d also try to find ways to turn this into a positive long-term behavior change. Does your platform allow for crowd-sourced moderation in the way that Discourse does? If not, is there a way that you can encourage a change in the group mindset?

If people know that they’ll be added to a pre-mod group, it will be interesting to see if they work to get out of it. I’d do it, but I’d be prepared for things to get worse before they get better.


(Sarah Hawk) #4

Some of our members working in large gaming communities may have come up against these issues or something similar.

@Andrej_Raider @Maisha_Andriessen @Jeffrey_Otterspoor @Mjbill @Anthony_Williams I’d love to hear whether any of you have an opinion/insights on this topic.

Or perhaps @Erik_Martin in your time at Reddit?


(purldator) #5

I rather like how League of Legends crowd-sources its moderation (accolades go to @erlend_sh, as he brought up this situation at Discourse Meta).

Not only does the user base have the power to flag abusive behaviors, but they also have the power to make final judgments regarding punishment.

No one cares for a small group of individuals set above them. “Cares” in this context points to the idea of staff being seen as an oligarchy.

This is something I have seen so many times in gaming communities. At its most pathological it ends up poisoning all. Poisons staff. Poisons the user base. The oasis transforms into a cesspool.

Instead, hit 'em where it hurts.

Let their peers decide whether they should be banned or suspended and, for the latter, for how long.

I feel this ties into the good points @irreverance addressed above. My ideas, inspired by LoL’s crowd-sourced moderation, become the strategies (the how) that attach to @irreverance’s precise definitions.

The following strategy/idea places utmost focus on the behavior, not on the user who exhibited it.

And here it be: a practical method using Discourse’s trust levels. Give a single volunteering Regular the chance to pass judgement on a possible punishment by sending them the offending message with all context removed.

If it still reads as abrasive, negative, and unsuited to the community’s health (a Miller Test of sorts), then have the volunteer Regular propose a punishment suited to the behavior.

Actual appointed moderators now focus less on the user who did the unwanted behavior and more on the user-peer making the decision; the moderators then carry out that punishment, applying their own judgment as moderators along the way.
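A rough sketch of that flow (the data shapes here are hypothetical; Discourse really does have trust levels, but this review routing is my idea, not a built-in Discourse feature):

```python
# Sketch of the anonymized peer-review flow described above. The data
# shapes are hypothetical; Discourse has trust levels, but this routing
# is an idea, not a built-in Discourse feature.
import random

TRUST_LEVEL_REGULAR = 3  # Discourse's "Regular" trust level

def strip_context(post):
    """Keep only the words, so the behavior is judged and not the user."""
    return {"body": post["body"]}

def pick_reviewer(users):
    """Pick one volunteering Regular at random to pass judgment."""
    volunteers = [u for u in users
                  if u["trust_level"] >= TRUST_LEVEL_REGULAR and u["volunteer"]]
    return random.choice(volunteers)

def route_for_review(flagged_post, users):
    """Send an anonymized copy of a flagged post to a single peer reviewer."""
    return pick_reviewer(users), strip_context(flagged_post)

# The reviewer proposes a punishment; an appointed moderator then reviews
# that call and carries it out.
users = [{"name": "regular_a", "trust_level": 3, "volunteer": True},
         {"name": "newbie_b", "trust_level": 1, "volunteer": True}]
reviewer, anonymized = route_for_review(
    {"body": "flame flame flame", "author": "troll42"}, users)
print(reviewer["name"], "sees:", anonymized)  # no author attached
```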

(I know this does not wholly deal with pre-moderation. It may, though, reduce the need for it, barring any facets of a given community that would make the idea implausible.)


(Alena Rybik) #6

Thanks for your replies, everyone. Sorry it took me a while to jump back into this conversation; we’ve been moving our community to a new(er) platform.

Unfortunately not: our platform doesn’t allow for crowd-sourced moderation.

@purldator: Thanks for the excellent link, I found some useful tips there! I really like the idea of giving respected members of the community the chance to make the final call regarding punishment.


(Sarah Hawk) #7

If you do pre-moderate, let us know how it goes, @alenarybik.
I’d be very interested in the results you find.