The steady stream of people telling me that the Santa moderation bot is going to ban anyone who gets downvoted or disagrees with the group continues unabated.

Here’s an olive branch: you’ve got a point. It’s a black box, I juggle the parameters of some secret process to ban the people who got some downvotes, and I can understand how that comes across as toxic. I might or might not be lying about taking careful time to look over its judgements and make sure the impact is more positive than negative, but at the end of the day, it doesn’t matter. You still have to trust my intentions and trust the bot to make good decisions, and handing that over to an automated system rarely works out well.

To me, delegating the moderation of the community to the segment of that community that’s trusted and consistently upvoted by the rest of us is better than handing it to a handful of people who wield unilateral power according to arbitrary rules. The question is simply whether this algorithm is actually doing that delegation effectively, or whether it’s just banhammering anyone who gets a couple of downvotes. When I look at the bot’s judgements, I like them most of the time, and I’m confident it’s doing the first thing almost all of the time.
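
To give a feel for the difference between “consistently upvoted” and “caught a couple of downvotes once”, here’s a toy sketch in Python. It is not the bot’s actual formula; every name and number in it is made up for illustration:

```python
# Toy illustration only: a rank built on *average* per-comment score
# rewards a consistent pattern, so one downvoted comment barely moves
# a long, healthy record. Not the real bot's math.

def trust_rank(net_scores: list[int]) -> float:
    """Average net score per comment across a user's history."""
    if not net_scores:
        return 0.0
    return sum(net_scores) / len(net_scores)

print(trust_rank([5, 8, 3, -2, 6]))   # mostly upvoted, one stumble -> 4.0
print(trust_rank([-3, -4, -5, -6]))   # consistently downvoted     -> -4.5
```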

In talks behind the scenes with other moderators, I’ve gone into a lot of detail about specific users and gone back and forth about judgements, and I do a ton of my own spot-checking as well. I don’t want to do that publicly. I think it would be deeply informative to post a list of the “top ten” and “bottom ten” users and go into detail about why the low-ranked users ended up where they are, but that’s probably not a good idea.

What I would like to do is share that information on some level, so that people can see what’s going on instead of just taking my word that everything’s fine. It’s tough, because I can’t break down every level of detail without invading all kinds of people’s privacy. That said, I do think there’s a way to open up the process so people can see what’s going on and give input.

One happy medium would be to have the bot post a spot-check automatically about once a week: pick out one random user who’s right on the borderline, and post a couple of the worst comments they made. That borderline is exactly what I’m aiming at when I mess around with the parameters. Some comments are clearly toxic and have no business anywhere. Some are clearly free speech, and even if they’re getting downvotes, they deserve to be heard. And some sit on the borderline in between. My goal is to set the parameters so that the rank value that triggers a ban lines up with the users who are on that borderline.
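
To make that concrete, here’s a minimal sketch of the weekly spot-check in Python. The data shapes, the `rank` field, and the threshold constants are all assumptions for illustration, not the bot’s real internals:

```python
import random

BAN_THRESHOLD = 0.0      # assumed: rank below this means a ban
BORDERLINE_MARGIN = 0.1  # assumed: how close to the threshold is "borderline"

def pick_spot_check(users):
    """Pick one random user whose rank sits near the ban threshold."""
    borderline = [
        u for u in users
        if abs(u["rank"] - BAN_THRESHOLD) <= BORDERLINE_MARGIN
    ]
    return random.choice(borderline) if borderline else None

def worst_comments(user, n=2):
    """Return the user's n lowest-scored comments for the public post."""
    return sorted(user["comments"], key=lambda c: c["score"])[:n]

# Example with made-up data:
users = [
    {"name": "alice", "rank": 0.05, "comments": [
        {"text": "example comment A", "score": -12},
        {"text": "example comment B", "score": 3}]},
    {"name": "bob", "rank": 0.9, "comments": [
        {"text": "example comment C", "score": 25}]},
]
chosen = pick_spot_check(users)
if chosen:
    for c in worst_comments(chosen):
        print(chosen["name"], c["score"], c["text"])
```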

I can see some upsides and downsides to posting that publicly. What do people think, though? What would you want to see in order to form an informed opinion of this whole approach?

  • Five@slrpnk.net · 3 months ago

    I love this: Reddit used to do a yearly thing where they’d send you your top upvoted and downvoted posts and comments, which was always nostalgic and fascinating to me as a user. Like Canvas, I think it’s an idea worth copying with a more federated framework.

    Maybe you could write an action that allows Fediverse members to get a similar breakdown and visualization automatically generated and then delivered to them via direct message. People who are curious about how the bot works can message the bot and see how it views them, and then they can share the details publicly if they so choose. I think this could be really popular.
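
    If someone wanted to prototype that, the reply could be as simple as formatting the stats the bot already keeps. A rough sketch in Python; the field names and message shape are assumptions, and the actual fediverse DM plumbing is left out:

    ```python
    # Hypothetical: build the breakdown message the bot would DM back
    # when a user asks how it views them. Field names are made up.

    def format_breakdown(name: str, stats: dict) -> str:
        top = max(stats["comments"], key=lambda c: c["score"])
        bottom = min(stats["comments"], key=lambda c: c["score"])
        return (
            f"Hi {name}! Here's how the bot currently sees you:\n"
            f"  rank: {stats['rank']:.2f}\n"
            f"  top comment ({top['score']:+d}): {top['text']}\n"
            f"  bottom comment ({bottom['score']:+d}): {bottom['text']}\n"
            "Share it publicly if you like!"
        )

    print(format_breakdown("Five", {
        "rank": 0.62,
        "comments": [
            {"text": "great post!", "score": 18},
            {"text": "hot take", "score": -4},
        ],
    }))
    ```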