[meta] Moderation tools
Motivation
Open projects need moderation. Dealing with the kinds of incidents that have happened recently on GitLab.com and similar sites (see #18608 (closed) for more specific discussion) requires fine-grained, easy-to-use moderation tools. No one likes waking up to thousands of angry or troll comments, and it is even worse when there's nothing you can do about it.
Proposed Implementation Plan
Chunk 1:
- Implement reporting (#30281 (closed))
- Implement moderation log backend, scoped to project
- Visual and UX design for locking by owner only
- Implement locks (#18608 (closed)) and set them up to leave logs
Chunk 2:
- Visual and UX design for moderation buttons, popup
- Implement bans with optional duration, adding log entries (https://gitlab.com/gitlab-org/gitlab-ce/issues/35943)
- Add secondary indices to moderation logs: moderator and moderated user
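As a rough illustration of the log backend in Chunk 2 and its two secondary indices, here is a minimal in-memory sketch in plain Ruby. Hashes stand in for database indices, and none of these names (`ModerationLog`, `record`, etc.) are GitLab's actual schema; this is only a shape for the idea:

```ruby
# Hypothetical sketch of a moderation log queryable three ways:
# by project, by moderator, and by moderated user.
class ModerationLog
  Entry = Struct.new(:project, :moderator, :user, :action, :at)

  def initialize
    @entries = []
    @by_moderator = Hash.new { |h, k| h[k] = [] }  # secondary index: who moderated
    @by_user = Hash.new { |h, k| h[k] = [] }       # secondary index: who was moderated
  end

  def record(project:, moderator:, user:, action:)
    entry = Entry.new(project, moderator, user, action, Time.now)
    @entries << entry
    @by_moderator[moderator] << entry
    @by_user[user] << entry
    entry
  end

  def for_project(project)
    @entries.select { |e| e.project == project }
  end

  def by_moderator(name)
    @by_moderator[name]
  end

  def for_user(name)
    @by_user[name]
  end
end
```

The two extra indices are what make the Chunk 3 frontends (filter by moderator, filter by moderated user) cheap to serve.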
Chunk 3:
- Design & implement frontends for the moderation log in 4 contexts:
  - for project
  - for namespace
  - filtered by moderated user
  - filtered by moderator
Chunk 4:
- Visual and UX design for editing features of moderation
- Implement content deletion and mod editing, with log entries
- Add a "moderator" role or otherwise extend moderation ability past owners
Chunk 5:
- Implement non-moderator reporting
  - does not add to the public log; decide on messaging pathway
Chunk 6:
- Design the "Extract derail" comment-moving feature
- Implement "Extract derail" and its log entries
Scenarios
Different kinds of projects will need different moderation workflows based on their own governance, CoC, etc. The kinds of moderation situations I want to focus on are:
- large public projects, many contributors
  - examples: npm, rails, gitlab
  - actions should be public, auditable
  - decisions need to be made quickly
  - clearly show what behavior is unacceptable
- small projects, targeted externally
  - examples: pronoun.is, bumblebee, rouge
  - silent deletion: no troll satisfaction
  - private audit logs
  - maximize signal/noise by cutting out obvious trolling
Toolset
I think we can design a simple toolset that meets a wide variety of needs. In particular, I would like to propose the following features:
- Content deletion, mod editing
  - immediately remove or edit out offensive content
- Non-moderator "report"
  - notifies moderators of offensive content
- Bans
  - restricts issue, comment, and MR creation for the entire namespace (still allows read actions)
  - restricts referencing any content in the namespace elsewhere on the site
  - optionally temporary, reallowing access after x amount of time
  - unless the content is deleted, shows up inline in the comment ("this user was banned [for x time] for this content")
- Locks https://gitlab.com/gitlab-org/gitlab-ce/issues/18608
  - disallows anyone from adding comments to an issue or MR
  - optionally temporary
- Extract derail
  - move comments to a new issue
- Audit log (optionally public or private)
  - when public, a moderator's actions are visible, as well as users' moderation history
  - where private = visible to moderators
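The ban semantics above (write actions blocked namespace-wide, reads still allowed, optional expiry) can be sketched in a few lines of plain Ruby. `Ban` and `can_write?` are illustrative names, not GitLab's actual permission API:

```ruby
# Hypothetical sketch: a ban restricts write actions across a namespace
# and may expire after an optional duration; nil duration = permanent.
class Ban
  attr_reader :user, :namespace, :reason, :expires_at

  def initialize(user:, namespace:, reason:, duration: nil, now: Time.now)
    @user = user
    @namespace = namespace
    @reason = reason
    @expires_at = duration && now + duration  # seconds; nil => permanent
  end

  # A ban is active until its expiry; permanent bans never expire.
  def active?(now = Time.now)
    @expires_at.nil? || now < @expires_at
  end
end

# Write actions (issues, comments, MRs) are gated on any active ban in
# the target namespace; read actions are never gated.
def can_write?(user, namespace, bans, now = Time.now)
  bans.none? { |b| b.user == user && b.namespace == namespace && b.active?(now) }
end
```

Checking expiry lazily at permission time, as here, avoids needing a background job to lift temporary bans.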
For self-hosted instances that do not wish to have moderation, I would suggest one of the following:
- making the UI elements involved unobtrusive enough so as not to get in the way
- providing a global switch to turn off all moderation
Of these, I prefer the first if possible, since even within self-contained corporations the ability to lock threads or move conversations will be useful.
Workflows
I see these tools being useful for both kinds of projects. The workflows would be:
workflow for large public projects:
- Project owners select mods (owner implies moderator)
- Uses public audit log
- Lock threads when there is nothing more to be said, or temporarily to provide space for maintainers to deliberate about a decision
- Mods use temporary bans at will, while taking their time to decide on permanent bans
- Mods extract derails into separate issues to maintain focus
- Mods edit out particularly offensive content (images etc)
workflow for small personal projects:
- Usually one owner/moderator
- Private audit log ("have I dealt with this person before?")
- Owner deletes troll content, doesn't have to deal with it at all
- Owner can easily delete content + block user in one action
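The last step of the small-project workflow, deleting content and blocking its author in one action, could look like the following plain-Ruby sketch (`delete_and_ban` and the hash-based comments are hypothetical, purely to show the combined step):

```ruby
# Hypothetical one-click action: remove the troll content, ban its
# author, and record both in the audit log in a single step.
def delete_and_ban(comment, thread, banned, log)
  thread.delete(comment)                      # content disappears silently
  banned << comment[:author]                  # author loses write access
  log << [:delete_and_ban, comment[:author]]  # single audit-log entry
  log.last
end
```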
UX draft
= moderator is logged in, viewing a comment/issue from @example.
UX options: [moderate] [move] [moderation log]
[moderate]: (popup)
Moderate content:
(*) No action
( ) edit content:
[editor (grey unless selected)]
( ) delete content
[ ] Ban @example from (user/org) (for [1hr/6hr/24hr/1wk/1mo])
[ ] Close thread
[move]: (small popup)
Move to: [autocomplete issues, starting with recent, empty = new]
- comment disappears from thread
- comment appears in the new issue at the position implied by its original timestamp
- bonus: bulk move
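The [move] behaviour sketched above (disappear from the source thread, reappear in timestamp order in the target issue) reduces to a delete plus a sorted insert. A minimal plain-Ruby sketch, with `Comment` as a stand-in struct rather than GitLab's actual model:

```ruby
# Hypothetical sketch of moving a comment between issue threads while
# preserving ordering by original creation time.
Comment = Struct.new(:author, :body, :created_at)

def move_comment(comment, from:, to:)
  from.delete(comment)          # comment disappears from the source thread
  to << comment
  to.sort_by!(&:created_at)     # target stays ordered by original timestamp
  to
end
```

The bulk-move bonus is the same operation applied to a list of comments before a single re-sort.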
[moderation log]: (new page)
Previous moderation actions for @example in context of [project]
= non-moderator is logged in
new comment/issue from @example: [offensive content]
UX options: [report] ([moderation log] if public)
[report]: (popup)
Report @example for:
[dropdown of reasons, including other]
[comment, optional]
= banned user is logged in
"Leave a comment" is grey / button red: "You are banned from this project [for x time] [for [content]]"
[report] is grey or not present
= banned user tries to create content elsewhere that mentions this project using GLFM
"You cannot mention [project], because you are banned [for x time] [for [content]]"
= after mod edit
"[Edited by @moderator at TIMESTAMP]"
= after ban (unless deleted)
"[@example was banned [for x time] for this content]"
= after comment deletion
"(a comment was deleted)"
= after issue/MR deletion
- doesn't show up in issues/merge request list, or search
- 404 when manually navigating or externally linking to the page
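The deletion semantics above amount to one visibility rule applied in two places: listings/search filter deleted items out, and direct navigation treats them as missing. A hypothetical plain-Ruby sketch (names are illustrative):

```ruby
# Hypothetical sketch: deleted issues vanish from listings and search,
# and direct access behaves exactly like a nonexistent issue (404).
Issue = Struct.new(:iid, :title, :deleted)

def visible_issues(issues)
  issues.reject(&:deleted)   # excluded from issue lists and search
end

def fetch_issue(issues, iid)
  issue = issues.find { |i| i.iid == iid }
  return [404, nil] if issue.nil? || issue.deleted  # indistinguishable from missing
  [200, issue]
end
```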