Wikipedia:Universal Code of Conduct/2021 consultation

This is an old revision of this page, as edited by WJBscribe (talk | contribs) at 18:12, 12 April 2021 (→‎How would a global dispute resolution body work with your community?: badly, most probably). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.


This page contains discussion topics for the Universal Code of Conduct community consultation from April–May 2021. For more information, see the 2021 consultations page and the Universal Code of Conduct overview.

Request for comment: Universal Code of Conduct application

The Wikimedia Foundation is seeking input about the application of the Universal Code of Conduct.

The goal of this consultation is to help outline clear enforcement pathways for a drafting committee to design proposals for a comprehensive community review later this year. The proposals may integrate with existing processes or with additional pathways that may be suggested. For more information about the UCoC project, see Universal Code of Conduct overview.

Discussions are happening on many projects and are listed at the 2021 consultations page.

Please discuss in the subsections below and let me know if you have any questions. Xeno (WMF) (talk) 22:32, 5 April 2021 (UTC)

Consultation structure

There are five topics with several questions to help start conversations. Feedback provided will have a significant impact on the draft for enforcement guidelines that will be prepared following the comment period.

  • Please do not feel obligated to answer every question; focus on what is most important or impactful. We understand giving opinions on this topic can be difficult.
  • While it will be necessary to describe experiences in a general way, these discussions are not the place to revisit previously decided matters, which should be handled at a more appropriate location.
  • Each topic has several questions to help understand how the Universal Code of Conduct might interface with different communities.
  • For answers to some frequently asked questions, please see this page.
Please note
If you wish to report a specific incident, please use existing pathways. If that is not an acceptable pathway, outlining in more general terms why the existing process does not work will be useful. Please avoid sending incident reports or appeals to facilitators or organizers. The people organizing discussions are not the staff that handle specific abuse reports or appeals and are not able to respond in that capacity.
Community support
  1. How can the effectiveness of anti-harassment efforts be measured?
  2. What actions can be taken and what structures should be available to support those being targeted by harassment and those that respond to conduct issues?
  3. What formal or informal support networks are available for contributors? What is necessary for these groups to function well, and what challenges are there to overcome?
  4. What additional opportunities are there to deliver effective support for contributors? What would be useful in supporting communities, contributors, targets of harassment, and responders?
Reporting pathways
  1. How can reporting pathways be improved for targets of harassment? What types of changes would make it more or less challenging for users to submit appropriate reports?
  2. What is the best way to ensure safe and proper handling of incidents involving i) vulnerable people; ii) serious harassment; or iii) threats of violence?
  3. In your experience, what are effective ways to respond to those who engage in behaviours that may be considered harassment?
  4. In what ways should reporting pathways provide for mediation, reform, or guidance about acceptable behaviours?
Managing reports
  1. Making reporting easier will likely increase the number of reports: in what ways can the management of reports be improved?
  2. What type of additional resources would be helpful in identifying and addressing harassment and other conduct issues?
  3. Are there human, technical, training, or knowledge-based resources the Foundation could provide to assist volunteers in this area?
  4. How should incidents be dealt with that take place beyond the Wikimedia projects but are related to them, such as in-person or online meetings?
Handling reports
  1. In what ways should reports be handled to increase confidence from affected users that issues will be addressed responsibly and appropriately?
  2. What appeal process should be in place if participants want to request a review of actions taken or not taken?
  3. When private reporting options are used, how should the duty to protect the privacy and sense of safety of reporters be balanced with the need for transparency and accountability?
  4. What privacy controls should be in place for data tracking in a reporting system used to log cross-wiki or persistent abusers?
Global questions
  1. How should issues be handled where there is no adequate local body to respond, for example, allegations against administrators or functionary groups?
  2. In the past, the global movement has identified projects that are struggling with neutrality and conduct issues: in what ways could a safe and fair review be conducted in these situations?
  3. How would a global dispute resolution body work with your community?

Discussion

Community support

How can the effectiveness of anti-harassment efforts be measured?

  • I am not sure effectiveness of effort is something that can be measured, other than in the negative - sure, if reports of harassment go up, we know our efforts are NOT effective. But a decrease in reports does not necessarily mean our efforts were effective... it could just mean fewer people had confidence that if they report harassment it will stop. Blueboar (talk) 01:46, 6 April 2021 (UTC)
  • I agree that this is not measurable. Reports will likely go up because structures are put in place; that doesn't mean more harassment happens. In fact, I think that every study I've participated in with respect to harassment on Wikimedia projects has been tainted by the fact that there was no way to say things like "it happened once 8 years ago, but never since" or "it happened and was well addressed by the community". (In other words, those studies never measured the frequency of harassment or the effectiveness of existing solutions.) Frankly, if the only purpose of the UCoC is as an anti-harassment tool, then we're doing it wrong. Risker (talk) 02:16, 6 April 2021 (UTC)
  • I think we probably could figure out an indirect measurement by measuring perceived levels of harassment, if we gave it some good thought. A survey, ongoing or intermittent, that invites randomly-chosen accounts to participate. Should be super simple, something along the lines of "1. Have you personally experienced or witnessed harassment within the past 30 days?" (Terrible question, but just for a general idea.) Numbers go up or down over some TBD time period (I agree that we shouldn't assume an initial period of upticking is evidence to the contrary, as it could simply be an increase in awareness of what constitutes harassment), but eventually we have some indirect measure. OR: allow editors to flag posts by others that include either a ping to them or are on their own talk page. This last is IMO something we should have been doing for the past ten years, but of course it requires a developer and some assessment/response mechanism. Again, this would be only an indirect measure; both methodologies are actually measuring levels of perceived harassment. —valereee (talk) 14:15, 6 April 2021 (UTC)
  • Create a facility to research the quality of talk page interactions. It would be very interesting to grab a large sample of talk page interactions and ask volunteers to rate them as constructive or problematic. This could serve several purposes. First, it could help us create a more realistic set of examples of what the community considers ok and not ok. Second, it could maybe help us train a bot to notice possibly problematic exchanges. Third, it could provide data over time as to whether things are getting better or worse. Chris vLS (talk) 15:14, 9 April 2021 (UTC)

What actions can be taken and what structures should be available to support those being targeted by harassment and those that respond to conduct issues?

  • Those experiencing harassment can report it to administrators and ultimately ArbCom. This usually works. If we create a formal system beyond this, things become too bureaucratic. Harassment issues often need a more flexible non-bureaucratic approach. Blueboar (talk) 01:52, 6 April 2021 (UTC)
  • There are two types of harassment. One is obvious cases like off-wiki harassment or situations where on-wiki harassment can be proven with one or two diffs. These probably can be solved by existing structures (though I have doubts about support; some psychological help service would be good, but I am not sure we can afford it). The other type is when things happen in small steps, and one needs hundreds of diffs to see anything and hundreds more to see whether this is really one-sided harassment and not a situation where one side of a dispute wants to gain advantage by calling the actions of the other side harassment. So far nobody on the English Wikipedia, including the ArbCom, has been willing to launch an investigation, find the diffs, understand the situation, and work out a solution. The only structure willing to do it was T&S, but it is not scalable. I am not sure what scalable structure we could have here, but we can think about one.--Ymblanter (talk) 05:53, 6 April 2021 (UTC)
  • I'd like to see a button that can be used to flag posts as harassment. I don't know how this works to keep it from becoming its own form of harassment -- maybe you only get one such flag a month? Maybe it's a right that can be removed for abusing it? —valereee (talk) 14:19, 6 April 2021 (UTC)
    Moved discussion of this idea to #How can reporting pathways be improved for targets of harassment? What types of changes would make it more or less challenging for users to submit appropriate reports?
  • I think meaningful resources should be available to those who feel harassed for any reason. WMF benefits from a vast amount of volunteer labour and these days has access to considerable financial resources. It would be good if some of that was directed to providing help for the volunteer community at the heart of the project. WJBscribe (talk) 10:15, 7 April 2021 (UTC)
  • Let's not copy the Silicon Valley model too closely: Community values -- upheld by the community -- work more powerfully than other measures. Most social networks don't have a purpose (other than ads), so there is no agreement within the community about what is or isn't ok. So they have to hire moderators to enforce a policy. Wikipedia is the opposite. We have a purpose. Our community has a strong consensus about what is constructive. We enforce a vast array of policies millions of times a day. Having an openly edited encyclopedia shouldn't be scalable, but the community makes it so. The same is true of harassment. If it is instilled like other community values, regular editors will call it out. The most powerful tools would be those empowering regular editors to do this, maybe something like an MOS for talk page interactions, so it is as easy to tell someone they've crossed into harassment as it is to tell them they bolded too many words in the lead sentence. Finally, to answer the question, we should consider a separate noticeboard. The "button" question depends on many details, and would need to consider specific designs. Chris vLS (talk) 15:28, 9 April 2021 (UTC)
    • One issue with a MOS for talk page behaviour is that our existing literature on the topic (e.g. WP:NPA – "Comment on content, not contributor") already very clearly prohibits most forms of unconstructive behaviour that people engage in. However, these very policies/guidelines/essays are used as ammunition to wikilawyer with. If they're not ignored outright, it becomes a discussion about "well actually you've only pointed to an essay" or "if you had a proper argument you would be able to employ it here rather than commenting on my tone" or "NPA doesn't negate the fact that competence is required" or the other ridiculous things people know they can get away with saying, despite the statements being egregiously rude and obviously written out of spite. And people are extremely creative with their rudenesses, so I don't think a full rulebook "all the things not to do" is achievable. The problem is a culture and community with a consistently bad tone, such that there's no way to enforce any of the rules we have without banning 90%+ of our contributors in some topic areas. On the other hand, this is exactly what you're talking about with the need for the community to have good behaviour as ingrained, intrinsic values, rather than approaching this like social media moderation, to which I agree. — Bilorv (talk) 22:07, 9 April 2021 (UTC)

What formal or informal support networks are available for contributors? What is necessary for these groups to function well, and what challenges are there to overcome?

  • No formal networks as far as I am aware of, unless we count T&S as a network. Informally, pretty much depends on the connections. I am (almost) not on social media, I am not a member of any Wikiprojects, I have been to a few meetups but I am certainly not a Wikimeetup regular - which means if I am in trouble I am basically on my own. If I manage to formulate the issues better than my opponent can, I probably would not lose much beyond my time spent, but if my opponent has a lot of time, determination, a number of friends, and does not screw up really badly, my case is hopeless whatever I do.--Ymblanter (talk) 18:34, 6 April 2021 (UTC)
  • The lack of current structured support leads inherently to people feeling isolated or forming small groups for informal support. Such groups are likely to share similar world views and may not view issues objectively as a result given the diverse backgrounds of contributors. As noted above, the WMF has access to financial resources to provide meaningful support to the volunteer community. It should be looking at the kind of support for stressed/harassed employees which are commonly provided by other corporations. WJBscribe (talk) 10:17, 7 April 2021 (UTC)

What additional opportunities are there to deliver effective support for contributors? What would be useful in supporting communities, contributors, targets of harassment, and responders?

  • Fix our admin tools so that they are fit for purpose. MER-C 17:56, 6 April 2021 (UTC)
  • See my comments above. Treat volunteer editors, admins, etc. at least as well as you would paid employees doing the same job, and invest the same resources. What resources are available to WMF paid staff who report feeling stressed/harassed? The same should be given to volunteer contributors. WJBscribe (talk) 10:19, 7 April 2021 (UTC)
  • I believe it was in WP:AHRFC where I first saw the idea: WMF-funded anti-harassment workshops and bystander intervention training for community members. Many of us simply do not have the skills or experience to support contributors who come to us with fears of harassment or other long-term conduct issues. These are largely social and interpersonal skills, not technical tools, so they need to be developed beyond the various admin toolkits. Using Foundation funds to create a sizeable group of contributors who can intervene in situations, de-escalate, and properly respond to allegations of abuse will make the community more resilient in the long term, not only because we will have people able to handle issues properly, but also because this group can further train other volunteers to create a self-sustaining cultural institution like the DRN or ArbCom. Wug·a·po·des 00:36, 9 April 2021 (UTC)

Reporting pathways

How can reporting pathways be improved for targets of harassment? What types of changes would make it more or less challenging for users to submit appropriate reports?

  • At the very least, there should be a page with a flowchart/wizard/algorithm that tells the user where is the best place to report it and what evidence they need to provide. MER-C 17:19, 6 April 2021 (UTC)
  • Each project should be looked at individually in this regard, and given help in strengthening its processes. I do not believe this should be centralised, save in respect to the most serious incidents (i.e. those that may need to involve law enforcement), or for very small projects that lack dispute resolution mechanisms. WJBscribe (talk) 10:21, 7 April 2021 (UTC)
  • If we are talking about long-term low-level harassment, it is notoriously difficult to report because (i) one should be able to make a convincing case, using many diffs, which most users are not capable of doing; (ii) there should be someone willing to look at the report, wading sometimes through hundreds or thousands of diffs, and even this might not be enough, because the behavior of the reporter or even of third parties might not be ideal either, and the whole episode needs to be taken into account. To be honest, I do not see this happening, but at least there should be clear instructions and maybe even training on how good and clear reports can be written. --Ymblanter (talk) 15:58, 9 April 2021 (UTC)
  • I'd like to see a button that can be used to flag posts as harassment. I don't know how this works to keep it from becoming its own form of harassment -- maybe you only get one such flag a month? Maybe it's a right that can be removed for abusing it? —valereee (talk) 14:19, 6 April 2021 (UTC)
  • Interesting idea. I also agree that a prominent "Report" button/link will be helpful. --Titodutta (talk) 03:52, 8 April 2021 (UTC)
I too think a red-flag / report button could be useful. I was going to create a section under Discussion, but then I saw your mention of it here, valereee. The way I pictured it, we wouldn’t investigate every report, but only users or diffs that crossed some threshold of report counts. It would be an option you activate on a specific diff, like you do for Thanks. How to avoid organised POV groups from abusing the system? How to allow people to indicate "no, I don’t think this edit was unacceptable"? Who's going to volunteer to review the red-flag lists? Perhaps after a diff crosses its threshold, and gets listed, the Report button could change to a set of Voting buttons (to avoid people piling on the Report button after a listing). There would be a lot of detail to work out to get the process functioning, but it'd be worth a try. [Should this thread be moved to "How can reporting pathways be improved..."?] Pelagic (messages) – (02:05 Fri 09, AEDT) 15:05, 8 April 2021 (UTC)
  • I like the idea, and I'm going to focus on possible flaws because I do think it is worth pursuing further. If flagged edits went to the community to discuss then people would turn up to this ANI-on-steroids with popcorn looking for drama. But I also wouldn't trust admins to monitor the system as many of the tone issues we have come from admins setting bad examples and dismissing concerns too freely (though some because they somehow ended up in almost sole charge of an extremely large and fast-paced area and already need 10 times more hours in the day to do the job justice). I'd only trust groups like checkusers, OTRS volunteers or arbs (... and definitely not the WMF), but this is out of scope for them and would be a very large additional burden which I'm not sure those small groups would want to or be able to manage. So I guess we have to get creative about how such a report system could be monitored. — Bilorv (talk) 22:07, 9 April 2021 (UTC)
    (Moved discussion of valereee's comment, as suggested)
  • Rather than flagging posts as harassment, we should be removing them. Removal of harassment is an ethical obligation for all editors, not just a small group of enforcers. Vexations (talk) 01:02, 12 April 2021 (UTC)

What is the best way to ensure safe and proper handling of incidents involving i) vulnerable people; ii) serious harassment; or iii) threats of violence?

  • See above. Certain things shouldn't be posted publicly. MER-C 17:19, 6 April 2021 (UTC)
  • Serious harassment and threats of violence (as issues potentially requiring the involvement of law enforcement) can be dealt with by a central mechanism. Projects should be better assisted to deal with specific requirements of "vulnerable people". WJBscribe (talk) 10:23, 7 April 2021 (UTC)
  • I've never (to my memory) seen an explicit threat of real-life violence, so I think they are thankfully rare, but without talking to someone who has I wouldn't be able to tell you whether they're dealt with well. On the vulnerability topic, it's just too broad a question. Are we talking about children? People with autism? People with poor mental health? Each of these groups has different vulnerabilities that are not remotely alike (and very diverse even within the group). — Bilorv (talk) 23:22, 9 April 2021 (UTC)

In your experience, what are effective ways to respond to those who engage in behaviours that may be considered harassment?

  • I am not sure if I've correctly understood the phrase "may be considered" - does this mean that the behaviour is not objectively harassment, but may be perceived as such? That is a very tricky area, given the diverse backgrounds of our contributors, which can easily lead to good-faith mistaken perceptions of others. However, inappropriate allegations of harassment are, in and of themselves, a form of harassment. If the issue is that behaviour that is not actually harassment is being perceived as such, both parties may be in need of assistance - there ought to be no "first mover" advantage in who first makes a report of harassment. For example, an editor may feel harassed by an administrator who is legitimately taking issue with their contributions to the project. Say the administrator noticed that they had uploaded a non-free image with incorrect tagging. The administrator, following an exchange about this with the contributor, realises that the contributor does not understand the licensing requirements, reviews the contributor's other uploads and tags a number of further images for deletion. The contributor perceives this as unwarranted personal attention and/or "stalking" of their contributions. It is important in this sort of context that the labels applied by the contributor are not unquestioningly accepted - that time is taken to explain the situation carefully to the contributor, and support is given to the administrator who is likely to feel stressed as a result of the allegations being made about them. WJBscribe (talk) 10:31, 7 April 2021 (UTC)
  • Reverting an experienced editor twice and making them start the talk page discussion, curt edit summaries, following someone around because of "low quality" contributions—the difficulty is that these actions are often simply necessities to maintain the quality of the encyclopedia, as WJBscribe raises, but they are also a staple of all genuine harassment. But the community will almost always treat it as the former, even in cases where it's obviously not.
    Someone once reverted over one hundred edits I made (edits prompted by discussion with lots of time for people to weigh in and made slowly and carefully with pauses to listen to feedback as I went) over the course of an hour while I was desperately trying to engage them in discussion in four different ways, and saying that I had real-life commitments and would be happy to discuss it with them if they just paused the reverting until we had established what the situation was. That was serious harassment to me, but to everybody else who observed the situation it was a "content dispute". Not so. I've spent many hours trying to see it from that person's point of view, as the experience deeply affected me, but if they'd spent five minutes trying to see it from mine they would not have done what they did. Especially if they'd known the way in which I was, to use the language of the question above, in the category of "vulnerable people" at the time. It doesn't matter how many of the pings I get now are friendly (currently at maybe 90%)—I always feel a sense of dread when I see the number below the bell.
    All of these things count five times over in sensitive subject areas. If you think about it, it is expected and even desirable that some of the editors most passionate about sharing and collating free information about rape will be rape survivors or have related experiences—but all the content disputes I've seen in this topic area have someone being at best flagrantly irresponsible in their tone and choice of wording. We have a similar thing going on to the #NotAllMen/#YesAllWomen talking points of the #MeToo movement, where it just takes one person out of twenty being highly aggressive to create an environment in which everyone with trauma, related grief or mental health difficulties will not be able to stay in that topic area for health reasons, and so we lose anyone acting in good faith or who is here first and foremost to share free information, rather than to argue and POV push. If the content area is the Rwandan genocide, ask "would I be happy repeating this comment out loud to someone who lost a parent in the genocide? Am I proud to be making this comment if the other editor is in such a position?" Apply this to whatever heated topics you're looking at.
    The effective ways to combat these bad behaviours are by communally setting good examples and making someone the odd one out if they are starting drama or being hostile; by going out of your way to tell someone "thank you for making this comment" or expressing gratitude to someone acting in good faith; and by disengaging as rapidly as possible from someone who's in it for the argument. Warnings do not work when someone has already established "I can get away with acting like this". Admins having zero tolerance does not work in an environment like that (they don't have the power to deal with the fallout of such actions). WMF intervention does not have a chance of working. — Bilorv (talk) 23:22, 9 April 2021 (UTC)

In what ways should reporting pathways provide for mediation, reform, or guidance about acceptable behaviours?

Managing reports

Making reporting easier will likely increase the number of reports: in what ways can the management of reports be improved?

  • Reports should be visible to all in an anonymized form. It should be clear to all that anonymizedaccountX filed 20 reports on fifteen different editors and anonymizedaccountY has had five reports filed about them, but the diffs and the usernames should be visible to only those with certain rights. If a report is deemed to warrant an investigation, that should be visible to all, as well as some general statement of any outcome. —valereee (talk) 14:46, 6 April 2021 (UTC)
  • A committee, similar to the m:Ombuds Commission, tasked with receiving and triaging reports of abuse. These can be forwarded to groups on the local wiki or to T&S depending on the severity of the particular incident and ability for the local project to effectively handle it. Wug·a·po·des 00:41, 9 April 2021 (UTC)

What type of additional resources would be helpful in identifying and addressing harassment and other conduct issues?

Are there human, technical, training, or knowledge-based resources the Foundation could provide to assist volunteers in this area?

  • I was pointed to a community discussion, Special:PermanentLink/1016202536#What we've got here is failure to communicate (some mobile editors you just can't reach) wherein a strong desire was expressed for resources to be deployed to improve the ability to reach mobile editors. There are a number of phabricator reports at the linked thread. Suffusion of Yellow is tracking these issues. Xeno (WMF) (talk) 01:20, 6 April 2021 (UTC)
    • This critical bug in particular must have cost thousands of hours of volunteer time, and the WMF's failure to fix it after well over a year is indefensible. Lackadaisical attitudes like this deprive us of the next generation of Wikipedia editors, particularly as there are many countries in which a smartphone is the only way the majority of the population can access the internet. It needs to be fixed, today. — Bilorv (talk) 23:37, 9 April 2021 (UTC)

How should incidents be dealt with that take place beyond the Wikimedia projects but are related to them, such as in-person or online meetings?

A general concern – it is not clear whether this applies to non-Wikimedia actions. For example, suppose someone has a Twitter or personal blog or website, and they make a post which has nothing to do with any Wikimedia project. Could such a post be punished under this code of conduct? Or should actions/statements/etc which occur outside of any Wikimedia project or event, and which aren't making any reference to any Wikimedia event, be excluded? I think, statements and actions which occur outside of the context of any Wikimedia project or event, and which don't make any reference to any Wikimedia project or event, should be out of scope for any "Code of Conduct". Mr248 (talk) 00:31, 6 April 2021 (UTC) (portion copied from #Mr248's feedback)

I feel going "outside the scope" of Wikipedia is digging a dry well. It could be a very nice well, but if there is no water it is a waste of time. Neither editors, admins, nor WMF staff are the world police, and Wikipedia has enough to be concerned with. Otr500 (talk) 21:17, 10 April 2021 (UTC)

Handling reports

In what ways should reports be handled to increase confidence from affected users that issues will be addressed responsibly and appropriately?

  • Reports should be visible to all in an anonymized form. It should be clear to all that anonymizedaccountX filed 20 reports on fifteen different editors and anonymizedaccountY has had five reports filed about them, but the diffs and the usernames should be visible to only those with certain rights. If a report is deemed to warrant an investigation, that should be visible to all, as well as some general statement of any outcome. —valereee (talk) 14:46, 6 April 2021 (UTC)

What appeal process should be in place if participants want to request a review of actions taken or not taken?

When private reporting options are used, how should the duty to protect the privacy and sense of safety of reporters be balanced with the need for transparency and accountability?

  • You can't. If incidents involving i) vulnerable people; ii) serious harassment; or iii) threats of violence are taking place, absolute privacy and safety should be guaranteed to reporters. IMO all you can do is make sure those investigating these incidents are competent, diligent and empathetic people and ideally put in place some sort of clear review mechanism - perhaps some kind of committee composed of community members and trained WMF staffers - so there's a sense of accountability. Making some kind of global WMF committee seems difficult though (just look at the Ombuds), and this mechanism only works if you can recruit competent and active community members to volunteer their time to do smoke-filled work. ProcrastinatingReader (talk) 14:24, 6 April 2021 (UTC)
  • Transparency should focus on the conduct being sanctioned rather than the reporter. If a case has merit, it matters little who reported the incident. What is important with transparency is that the person being sanctioned, and the community, can understand what act (harassment, etc.) led to the sanction. Disclosure of the act involved does not require disclosure of the reporter. feminist (talk) 02:07, 9 April 2021 (UTC)
Agree with ProcrastinatingReader on protecting those who submit reports when privacy is requested. If there are no safeguards in that respect, the whole plan will crash. Also agree with Feminist: there needs to be transparency. Private courts are not conducive to any remedy involving some form of justice. Also, it should be remembered that there are usually two sides to every story. If there are "egregious" actions and privacy concerns, then those should be dealt with accordingly. "Assuming" there are privacy issues where there are none, and/or holding closed-door proceedings, sets the stage for a possible kangaroo-court scenario. Otr500 (talk) 21:44, 10 April 2021 (UTC)Reply

What privacy controls should be in place for data tracking in a reporting system used to log cross-wiki or persistent abusers?

Global questions

How should issues be handled where there is no adequate local body to respond, for example, allegations against administrators or functionary groups?

In the past, the global movement has identified projects that are struggling with neutrality and conduct issues: in what ways could a safe and fair review be conducted in these situations?

  • m:User:Rschen7754/Help, my wiki went rogue! summarizes these situations fairly well. The problem is that stewards have been very reluctant to take action without a very strong consensus on Meta. On one hand, I can sympathize, since they are not a global ArbCom. On the other, nobody is then tasked with the problem and it is left to continue further. As for Croatian Wikipedia, the situation was left to deteriorate from 2013 to 2021, when it was discovered that a local CU had violated the privacy of editors. I don't know what the solution is - but we need to do better. --Rschen7754 18:51, 11 April 2021 (UTC)Reply
  • What does “the global movement” mean? If WMF are equating the projects with some kind of social movement (beyond sharing free encyclopaedic content), that potentially raises neutrality issues of its own. Or is it just a synonym for WMF, in which case I would like clarification as to whether this question is referring to smaller projects with a lack of diverse membership or projects such as this one? Does WMF regard enwiki as a project that is “struggling with neutrality and conduct” for example? WJBscribe (talk) 18:08, 12 April 2021 (UTC)Reply

How would a global dispute resolution body work with your community?

Any "global dispute resolution body" will likely do more harm than good if it tries to interact with the English Wikipedia. Enwiki internal governance isn't perfect, but the memory of WP:FRAM is still fresh in the minds of too many editors, and WMF's interaction with the enwiki community in that fiasco was, put simply, atrocious. feminist (talk) 16:45, 8 April 2021 (UTC)Reply

You already have an answer to this question, and it can be summarised as "Framgate". OFFICE-invoked one-year ban from en-wiki only for harassment, OFFICE not taking any action against Fram on other WMF wikis when he gave his side of events (thereby royally damaging the WMF's arguments), stonewalling from the WMF even on matters that could (and should) have been disclosed without revealing the identity of anyone who was harassed, evidence that (once the Arbitration Committee finally got to see an expurgated version of it) was deemed too flimsy to justify the action taken, and an RfC on partial blocks that turned instead into a referendum on WMF's interference with a community's self governance. Those who will not learn from the past are condemned to repeat it. —A little blue Bori v^_^v Jéské Couriano 23:33, 10 April 2021 (UTC)Reply

I endorse the above comments completely. The only way I could see this working would be if the improper actions in scope were across several wikis and it was a global issue. --Rschen7754 18:48, 11 April 2021 (UTC)Reply

It is hard to see how a global dispute resolution body can work with established larger projects such as enwiki, dewiki etc. Framgate and superprotect are cautionary lessons that volunteer communities are not looking to be ruled from above. Such a body should limit itself to handling: (a) global permanent bans arising from the most severe misconduct and (b) potentially working with stewards and global sysops on smaller projects without established processes. WJBscribe (talk) 18:11, 12 April 2021 (UTC)Reply

Additional discussion

Questions

General comments

The following links may be useful for background: (copied from meta:Universal Code of Conduct/Discussions)

English Wikipedia

(Note by Jonesey95:) The links above are copied here for convenience so that applicable excerpts of those discussions can be inserted here without having to rehash those discussions. – Jonesey95 (talk) 00:23, 6 April 2021 (UTC)Reply

Mr248's feedback

Sorry if I have put this in the wrong place; I am confused about where it goes. If I have put it in the wrong place, please move it. I don't have a problem with a "Code of Conduct" per se, but I have some concerns about the text of this specific code of conduct:

"People who identify with a certain sexual orientation or gender identity using distinct names or pronouns" – I have trouble remembering what pronoun to use for people and so often try to avoid using pronouns. I'm concerned that a policy might be interpreted as saying you have to use for people the pronouns they prefer, as opposed to choosing to avoid using pronouns entirely, and hence my action of avoiding using pronouns might violate the policy. Sometimes I also call people "they", by which I mean "I don't remember what pronoun to use for you so I am just using 'they' as a default". (I think it is quite standard English to use "they" as a default pronoun when you aren't sure what pronoun to use.) I am concerned some people might make a big issue of that ("they is not my pronoun!") which would be a distraction, and honestly would make me feel unwelcome.

"Note: The Wikimedia movement does not endorse "race" and "ethnicity" as meaningful distinctions among people. Their inclusion here is to mark that they are prohibited in use against others as the basis for personal attacks" – I think that is problematic because some people identify with their race or ethnicity, and this could be read as saying officially that their choice of personal identification is invalid. For example, if a person of Italian descent identifies their ethnicity as "Italian" (or "Italian-American" or whatever), this seems to be saying their choice to consider that an important part of their own identity is invalid. Or similarly, if an African-American person identifies as "Black", this could be read as saying that their Black identity is not "meaningful", which they may well find offensive.

"Hate speech in any form" – I am concerned that is too vague. Some people understand "hate speech" as meaning stuff like using slurs, negative stereotypes/generalisations, etc, and I don't have a problem with prohibiting that. But other people interpret it much more expansively–for example, if a person has conservative religious views on sexual morality, some people would interpret the mere expression of those views as "hate speech"–and I'm concerned about those more expansive definitions. Of course, if a person has such views, they shouldn't be using Wikipedia as a soapbox for expressing them, but they may nonetheless be revealed somehow.

A general concern – it doesn't make clear whether it applies to non-Wikimedia actions. For example, suppose someone has a Twitter or personal blog or website, and they make a post which has nothing to do with any Wikimedia project. Could such a post be punished under this code of conduct? Or should actions/statements/etc which occur outside of any Wikimedia project or event, and which aren't making any reference to any Wikimedia event, be excluded? I think, statements and actions which occur outside of the context of any Wikimedia project or event, and which don't make any reference to any Wikimedia project or event, should be out of scope for any "Code of Conduct". Mr248 (talk) 00:31, 6 April 2021 (UTC)Reply

Mr248: thank you for your comment. This is a fine place to leave it; would it be okay if I copied some relevant portions to the question buckets above? Your last paragraph, for example, would fit into #How should incidents be dealt with that take place beyond the Wikimedia projects but are related to them, such as in-person or online meetings?
There are ongoing discussions about the actual policy text itself at meta:talk:Universal Code of Conduct and meta:talk:Universal Code of Conduct/Policy text. Xeno (WMF) (talk) 00:38, 6 April 2021 (UTC)Reply
Thanks sure you can copy my comment (or parts thereof) wherever you wish. Mr248 (talk) 00:41, 6 April 2021 (UTC)Reply

Comments by Johnuniq

From UCoC 3.1 – Harassment: "Harassment ... may include contacting workplaces or friends and family members in an effort to intimidate or embarrass." The "in an effort" clause makes the sentence pointless because a perpetrator can say their contacting an editor's workplace was in an effort to reach out and help the person develop (in fact, any such unsolicited contact should be forbidden). Harassment is defined as several items almost all of which would earn the perpetrator an immediate and permanent block at enwiki—no UCoC is needed. Does anyone in the WMF imagine that sexual harassment and threats etc. are tolerated? The problematic items are insults (how do I tell someone that their English is not adequate or that their edits show they don't understand the topic or Wikipedia's role?) and hounding (it's hard to know whether use of Special:Contributions is done to protect the encyclopedia or merely to upset/discourage a contributor—in fact, good editors have to upset and discourage ungood editors every day). Johnuniq (talk) 01:51, 6 April 2021 (UTC)Reply

Comments from zzuuzz

I sometimes wonder if what I'm about to say is out of scope for what the WMF is thinking, but I think it's relevant so I'll say it anyway. It addresses several of the questions posed, and none at the same time. I think it might be the elephant in the room.

I deal with an enormous amount of harassment - to me, other users, article subjects, as well as others - death threats, graphic threats of violence, threats to family members, persistent libel, doxxing, pestering, racial, sexual, you name it. The next steps are usually relatively straightforward and swiftly done in my experience - block, ban, disable email and TPA, range blocks, edit filters, and protection where we can (other lesser methods are available). In some cases we'll see a WMF ban get put in place. It just continues however, and it's usually from a relatively small group of the same people. The way I see it, a WMF global ban is not even an end goal, but usually just the start. We don't need guidelines of unacceptable behaviour to stop harassment, that is easy, we need the WMF to act in the real world, to work with ISPs, legal, PR, tech, the ordinary admins who witness it, and really anyone else they need to, in order to get the crazies effectively legally and technically kicked off the site. -- zzuuzz (talk) 05:19, 6 April 2021 (UTC)Reply

  • Excellent comment that definitely reveals the elephant in the room. The UCoC might be a redundant feel-good exercise when what is needed is real-world action regarding LTAs. Johnuniq (talk) 05:30, 6 April 2021 (UTC)Reply
  • Agreed. What has the WMF done to escalate matters when WMF bans don't work? If nothing, the UCOC is at best social washing. MER-C 17:44, 6 April 2021 (UTC)Reply
  • Zzuuzz put it much better than I ever could. The only thing this will do is bother and constrain the editors who are following the rules or who are minor nuisances. For the biggest problem editors, real-world action needs to be taken, and since WP:Abuse response - our previous effort at trying to handle this matter locally - was completely ineffectual without WMF Legal teeth, this absolutely must be handled by the WMF in a more offensive-oriented manner. Playing defence doesn't work when the enemy can just assault the fortress without any meaningful repercussions. —A little blue Bori v^_^v Takes a strong man to deny... 18:03, 6 April 2021 (UTC)Reply
  • This, and the section above/below, seems to presume that the only type of harassment is that from socks/LTAs. That may be the most voluminous, and it's certainly the type the community's established processes deal best with (block, ban, disable email and TPA, range blocks, edit filters, and protection where we can), but I'm not sure it's the most severe or difficult to deal with. It'd be nice if UCOC enforcement dealt with the problem of unblockables, and also with the problem of new editors subject to problems (especially off-wiki) who are not familiar with norms and reporting mechanisms available to them (indeed, Wikipedia:Contact us has no mention of mechanisms existing, such as ArbCom's contact info). ProcrastinatingReader (talk) 18:10, 6 April 2021 (UTC)Reply

Comments by Firefly

I agree entirely with zzuuzz's comments above - in my opinion the English Wikipedia handles most cases of harassment as well as it can, by blocking offenders and the tools they use (e.g. open proxies, VPN endpoints, etc.), and requesting global locks if required in cases of cross-wiki abuse. However, this is ultimately a game of whack-a-mole. We have multiple LTAs that get hold of new proxies of various types incredibly easily and start up their lunacy once again. We need concerted action from the WMF in the following areas: (a) a system to proactively globally block open proxies & VPN endpoints, (b) a framework to request "Office contact" with ISPs whose subscribers commit serious, on-going, intractable abuse on Wikimedia projects, and most importantly (c) a formal way for admins, stewards, and functionaries on the various projects to work with the WMF to address the issues of long-term, serious abuse. Without these, the UCoC is going to achieve very, very little I fear. ƒirefly ( t · c ) 14:32, 6 April 2021 (UTC)Reply

  • I feel it worth clarifying that I don't oppose the UCoC at all, I'm just skeptical it will actually achieve very much. ƒirefly ( t · c ) 14:47, 6 April 2021 (UTC)Reply

S Marshall

At the moment, the community deals with vandals by RBI. The draft text of this universal code of conduct, at section 3.3, requires us to engage with them: it clearly and specifically rules out our current process of reverting vandals' edits and denying them the oxygen of attention. Where is the correct place to discuss fixes to the draft UCoC text?—S Marshall T/C 23:39, 6 April 2021 (UTC)Reply

@User:S Marshall: The Code has been ratified by the Board and is no longer a draft. "The Foundation’s Legal Department will host a review of the UCoC one year after the completed version of it is accepted by the Board." (FAQ#Periodic reviews) "If you see more cultural gaps in the draft, kindly bring that to our attention on the main talk page of the Universal Code of Conduct, and these issues may be included in the first or subsequent annual reviews." [emphasis added] (FAQ#Conflict with local policies)
General Question: is that first review to be one year after the Phase 1 Code ratification or after Phase 2 Enforcement Policy ratification? Pelagicmessages ) – (01:27 Fri 09, AEDT) 14:27, 8 April 2021 (UTC)Reply
Then the Board must re-think. The first bullet point of section 3.3 rules out "The repeated arbitrary or unmotivated removal of any content without appropriate discussion or providing explanation". On review in context, the wording probably does allow us to deal with obvious vandals via RBI, but it denies us RBI with LTA cases, POV warriors, and most areas that are of interest to Arbcom.—S Marshall T/C 14:40, 8 April 2021 (UTC)Reply

WJBscribe

I am working my way through the questions above. In the meantime I wanted to raise a concern about the language of the UCoC as drafted. It includes the following:

"Insults: This includes name calling, using slurs or stereotypes, and any attacks based on personal characteristics. Insults may refer to perceived characteristics like intelligence, appearance, ethnicity, race, religion (or lack thereof), culture, caste, sexual orientation, gender, sex, disability, age, nationality, political affiliation, or other characteristics. In some cases, repeated mockery, sarcasm, or aggression constitute insults collectively, even if individual statements would not. (Note: The Wikimedia movement does not endorse "race" and "ethnicity" as meaningful distinctions among people. Their inclusion here is to mark that they are prohibited in use against others as the basis for personal attacks.)"

The note is problematic for a number of reasons:

  1. What is "the Wikimedia movement"? Is this a synonym for WMF, the Board, or is it an attempt to speak for all contributors on all projects?
  2. A contributor to this project may feel that their "race" and "ethnicity" are an extremely meaningful part of their self-identity. Ironically, they may feel harassed if these characteristics are dismissed out of hand. Saying that these are not endorsed as "meaningful distinctions" is a potentially divisive political statement. It has no place in the UCoC.
  3. Why are only "race" and "ethnicity" singled out? Does that mean that, by implication, the WMF (or worse, all of us, if that is what "Wikimedia movement" means) does endorse the other characteristics listed as meaningful distinctions among people (e.g. caste, disability)?

The note requires urgent attention. I am seriously troubled that the Board appears to have endorsed this language. WJBscribe (talk) 10:42, 7 April 2021 (UTC)Reply

Comment by Stifle

I concur with S Marshall, Firefly, and zzuuzz. This appears to be a great deal of effort being discharged in dealing with the wrong problem. Vandals (interpreted widely) don't care about rules and codes of conduct. Making more rules won't deter them. Stifle (talk) 11:03, 7 April 2021 (UTC)Reply

Feminist

One must keep in mind that there are local differences in the prevailing standards of human rights. Despite their universal nature, international human rights treaties are always implemented on a contextual basis, taking into account local differences as to economic development, culture, social norms and politics. This applies equally to the WMF Universal Code of Conduct as well. Application of the UCoC to local wikis must – and I repeat, must – take into account the prevailing cultural and economic background of the average editor of that wiki. For example, depending on the context, some may consider use of the term Latinx to be necessary for gender neutrality, while others may consider use of the term to be culturally imperialist. How would the WMF handle local differences in enforcing the UCoC? Will the WMF potentially add fire to the conflict via enforcement, or will it seek to encourage mutual acceptance of different approaches?

I also concur fully with WJBscribe. These are material concerns with the way the UCoC is drafted. The UCoC should be amended to address these concerns before it is enforced.

Finally, the justifications for the UCoC (under the Why we have a Universal Code of Conduct section) are not terribly convincing. A set of justifications focusing on ensuring Wikimedia covers content from diverse perspectives and maximizing social benefit for editors and readers would be much more convincing than the current text which simply involves the WMF Board of Trustees professing blind faith towards a set of ideals. feminist (talk) 05:37, 8 April 2021 (UTC)Reply

With regards to your last paragraph, I gave similar feedback in October at m:Talk:Universal Code of Conduct/Policy text/Archives/2020#Poorly explained which was not taken into account in later drafts. Wug·a·po·des 00:52, 9 April 2021 (UTC)Reply
Good to hear. If the WMF is unwilling to listen to the community even at the drafting stage, how can we trust them to apply the UCoC with full regard to the contexts and needs of local projects? feminist (talk) 02:02, 9 April 2021 (UTC)Reply

Comment by otr500

Wikipedia would be a good first stop on the "additional resources" trip. Some clarity on the word harassment would be helpful. I have mentioned this before. Misconduct is conduct not "generally" regarded as appropriate. A simple definition of harassment would be: systematic and/or continued unwanted and annoying actions by one party or a group. The policy on Harassment gives a definition: "Harassment is a pattern of repeated offensive behavior". When one editor "attacks" another editor, it violates more than one policy in the first instance. This should be reflected as attacks and harassment.
The main caption of WP:5P4 states: Wikipedia's editors should treat each other with respect and civility. A key word to be noted is "should". Any form of personal attack "should" be a red flag to be dealt with, yet that page includes: "This page documents an English Wikipedia policy". It describes a widely accepted standard that all editors should normally follow. The link is to Use common sense, with the question and answer: Why isn't "use common sense" an official policy? It doesn't need to be; as a fundamental principle, it is above any policy. The policy on "No personal attacks" includes harassment under the subsection Recurring attacks: Recurring, non-disruptive personal attacks that do not stop after reasoned requests to cease can be resolved through dispute resolution. In most circumstances, problems with personal attacks can be resolved if editors work together and focus on content, and immediate administrator action is not required. If I am the only one who sees a problem with this entire paragraph, all of this is in vain.
If comments are derogatory, and serious enough to create a hostile environment, they are disruptive. "If" a personal attack (direct or ad hominem) is serious, it should not have to occur several times or be considered egregious before it is deemed unacceptable.
Wikipedia already has separate classes for the seriousness of attacks or harassment. Those considered "Never acceptable" are classified as severe or egregious. When an editor personalizes comments, it is usually in the form of an attack. It is still serious even if not to the level of egregious and should not be ignored. I just saw an Admin block two editors for violating Wikipedia:DOX and then request oversight, so we have active Admins willing to protect Wikipedia as well as editors.
Insulting or disparaging an editor is a personal attack regardless of the manner in which it is done.
Wikipedia:WikiBullying should be presented at WP:PROPOSAL so it can be thoroughly vetted by the community. After all: (This page in a nutshell:) Bullying is not permitted on Wikipedia, and any violators will be blocked from editing. WP:BOOMERANG should not be a consideration if a legitimate report is given. No one should ever fear coming forward to make the community aware of a bullying concern.
Wikipedia touts being a civil community and harassment (and disruption) is contrary to this spirit and damaging to the work of building an encyclopedia. Maybe we should push that Wikipedia is a place where anyone can edit in a civil manner. The way to address harassment is to not be so lenient on any editor that "attacks" another. Maybe it is time to elevate "No personal attacks" and "No harassment" to a fundamental principle.
Most of the solutions for addressing "attacks and harassment" are already on Wikipedia. Sometimes they are not as clear as they need to be (watered down with words like "should") because we assume that any "rule" could be subject to WP:IAR, and that should not be the case in these instances. Otr500 (talk) 03:07, 11 April 2021 (UTC)Reply