https://www.reddit.com/r/modnews/comments/10gcle7/reddits_defense_of_section_230_to_the_supreme/
created by sodypop on 19/01/2023 at 20:40 UTC*
517 upvotes, 59 top-level comments (showing 25)
Dear Moderators,
Tomorrow we’ll be making a post in r/reddit to talk to the wider Reddit community about a brief that we and a group of mods have filed jointly in response to an upcoming Supreme Court case that could affect Reddit as a whole. This is the first time Reddit as a company has individually filed a Supreme Court brief and we got special permission to have the mods cosign anonymously…to give you a sense of how important this is. We wanted to give you a sneak peek so you could share your thoughts in tomorrow's post and let your voices be heard.
TL;DR: The Supreme Court is hearing for the first time a case regarding Section 230, a decades-old internet law that provides important legal protections for anyone who moderates, votes on, or deals with other people’s content online. The Supreme Court has never spoken on 230, and the plaintiffs are arguing for a narrow interpretation of it. To fight this, Reddit, alongside several moderators, has jointly filed a friend-of-the-court brief arguing in support of Section 230.
When we post tomorrow, you’ll have an opportunity to make your voices heard and share your thoughts and perspectives with your communities and us. In particular for mods, we’d love to hear how these changes could affect you while moderating your communities. We’re sharing this heads up so you have the time to work with your teams on crafting a comment if you’d like. Remember, we’re hoping to collect everyone’s comments on the r/reddit post tomorrow.
Let us know here if you have any questions and feel free to use this thread to collaborate with each other on how to best talk about this on Reddit and elsewhere. As always, thanks for everything you do!
--------------------------------------------------------------------------------
Comment by Ninja-Yatsu at 19/01/2023 at 22:22 UTC
248 upvotes, 14 direct replies
From my understanding, the summary of Section 230 is that volunteer moderators are not held legally responsible for someone else's comments if they miss them or fail to delete them. It's particularly an issue with defamation.
In other words, changing that could mean that if someone posts something messed up and nobody reports it or you miss it, you're pretty much held to the same rules as a newspaper that let the article into print, and you can get into legal trouble.
If that's what happens, it might not be worth my time or effort to moderate.
Comment by CapnBlargles at 19/01/2023 at 20:43 UTC
61 upvotes, 2 direct replies
Is there a link to the section we can reference/review prior to the post tomorrow?
Comment by Living_End at 19/01/2023 at 20:48 UTC*
122 upvotes, 2 direct replies
What is Section 230, how does it affect me, and why should I care? This information should be in this post and in the public Reddit post.
Comment by [deleted] at 19/01/2023 at 21:15 UTC
30 upvotes, 1 direct replies
[deleted]
Comment by YoScott at 20/01/2023 at 03:32 UTC
16 upvotes, 2 direct replies
Check out Zeran v. AOL. I was an employee moderator when the event happened that resulted in this case. Damn it was a nightmare.
If section 230 were limited or removed, I personally will stop moderating anything.
Thanks /u/sodypop for posting about this. This is way more important than people consider.
Comment by Lisadazy at 19/01/2023 at 21:37 UTC
21 upvotes, 3 direct replies
Serious question: as a non-American, can someone please explain to me how this ruling affects me? Or does this law only apply to American-based users?
Comment by Watchful1 at 19/01/2023 at 21:07 UTC
21 upvotes, 3 direct replies
With the current political environment, are you at all optimistic that such briefs make any difference in the decisions of the supreme court?
Comment by lukenamop at 19/01/2023 at 21:55 UTC
38 upvotes, 1 direct replies
I just finished reading the brief. Well writ, and in support of the autonomous actions of volunteer moderation across Reddit. Thank you for supporting the wide variety of communities built on this platform!
Comment by hansjens47 at 20/01/2023 at 00:09 UTC
8 upvotes, 1 direct replies
On page 11 of the brief:
"A given subreddit might even decide to increase or decrease the visibility of posts by users with certain karma scores."
I know you can make automod rules to limit posting based on karma thresholds, but that doesn't really fall under increasing/decreasing post visibility.
Comment by SnowblindAlbino at 20/01/2023 at 04:41 UTC*
6 upvotes, 1 direct replies
Just FYI for folks that are interested, SCOTUSblog does a good job of explaining the case[1] in question (Gonzalez v. Google) so it's easier to understand how (and why) this is ending up at the Supreme Court (i.e. in part because Justice Thomas basically asked for such a case in 2020). Their page on the case links to a bunch of resources including all of the other amicus briefs[2] previously filed. If you have the time and energy to read through some of them you can learn a lot about the case, what's at stake, and who is on which side. For example, it looks like Sen. Josh Hawley, the National Police Association, the AGs of 26 different states, the Counter Extremism Project, the National Center on Sexual Exploitation, the Zionist Organization of America, and many others have written in support of the petitioners-- i.e. they support the narrower reading of sec 230 that Reddit, Inc., opposes.
2: https://www.scotusblog.com/case-files/cases/gonzalez-v-google-llc/
On the other side -- those filing amicus briefs supporting Google (as Reddit is doing) -- are mostly tech companies, free speech organizations, academic/legal experts on this issue (including Eric Goldman[3]), and the like.
Yet another group has filed briefs that support *neither* side, including Sen. Ted Cruz, the Institute for Free Speech, the Lawyers Committee for Civil Rights Under Law, the Anti-Defamation League, and the Giffords Law Center to Prevent Gun Violence. So it's a real mix, and a complicated case, at least to this non-expert, as I read through the briefs and try to make sense of the arguments as they are presented. There's also a morass of case law being cited, so it would be cool to have someone with a strong legal background on the CDA and related legislation explain this in more depth.
Comment by Stetscopes at 20/01/2023 at 07:12 UTC
6 upvotes, 0 direct replies
Just hearing of this and... wow. We don't even get paid to do this, and we're doing it out of pure passion for the communities we handle. If we're held accountable, why even moderate a community?
Thinking about what happens if this goes through: what's it going to be like for users based outside of the US? Will we be held accountable too? Since Reddit is US-based, will it comply by banning communities that don't follow the ruling? If so, will there be any punishment for us? It just feels like a lose-lose situation in all of this.
We'll need to be more proactive, admins will have more work to do, and, what's more, we'd be held accountable for things people say and do that we have nothing to do with beyond removing and banning. There are also posts that never get reported. Feels like Article 13 all over again.
Comment by bluesoul at 20/01/2023 at 04:50 UTC
11 upvotes, 2 direct replies
As far as how it would affect myself and any hypothetical mod teams I'm around, it's easy. We would programmatically delete every post ever made to the subreddit in question, take the subreddit private, and probably delete our accounts after.
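(For what it's worth, a rough sketch of what that programmatic wipe might look like, assuming the PRAW library; the subreddit name and credentials below are placeholders, and Reddit listings only return roughly the newest 1,000 items, so a full purge would take more than this.)

```python
import praw  # assumes the PRAW library; all credentials are placeholders

reddit = praw.Reddit(
    client_id="CLIENT_ID",
    client_secret="CLIENT_SECRET",
    username="MOD_ACCOUNT",
    password="PASSWORD",
    user_agent="sub-wipe-sketch by u/MOD_ACCOUNT",
)

subreddit = reddit.subreddit("examplesub")  # hypothetical subreddit

# Remove every submission the listing returns (mods can remove, not
# delete, other people's posts; listings cap out around 1,000 items).
for submission in subreddit.new(limit=None):
    submission.mod.remove()

# Then take the subreddit private.
subreddit.mod.update(subreddit_type="private")
```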
I was reading about this earlier this morning and although the case is narrower in what it's trying to handle with Section 230, it's still broad enough to be a huge legal liability. When people hear "algorithms", the picture in their head is some huge advanced black-box system that magically determines things. And the reality is an algorithm is also just math, or a simple set of instructions. An upvote is part of an algorithm. Pinning a post, or stickying a comment, is part of an algorithm. Practically any mod action could be seen as a recommendation for what to do or what not to do, for what you should look at and what you shouldn't.
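To illustrate: the "hot" sort from Reddit's formerly open-source code is just a few lines of arithmetic over a post's vote score and age (a simplified sketch below, not necessarily what runs today):

```python
from datetime import datetime, timezone
from math import log10

def hot(score, created_utc):
    # Simplified version of the old open-source "hot" ranking:
    # log of the net vote score plus a bonus for newer posts.
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    age_seconds = created_utc - 1134028003  # epoch offset from the original code
    return round(sign * order + age_seconds / 45000, 7)

now = datetime.now(timezone.utc).timestamp()
# A day-old post with 500 upvotes ranks below a fresh post with 50.
print(hot(500, now - 86400) < hot(50, now))  # True
```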
Is that how the court will see it? I have no idea. Could someone sue me, my team, Reddit, and everyone else just to find out? You bet your ass, and most simple arguments I can think of would have standing.
(Is deleting the subreddit's contents an overreaction? I'm not convinced that it is. *Ex post facto* might or might not go out the window if one of those posts is later edited, deleted, whatever. I'm not a lawyer, and the risk vs. reward is a no-brainer in favor of just purging the contents outright.)
This place is pretty good, but no. Can't expose myself or my family to that kind of legal liability.
Comment by LizzeB86 at 20/01/2023 at 04:17 UTC
6 upvotes, 1 direct replies
If I’m going to be held legally liable for content in my boards I’ll be done with moderating. I’m not risking a fine or worse for something someone on here posts.
Comment by bisdaknako at 20/01/2023 at 09:03 UTC
4 upvotes, 0 direct replies
I just think about the number of users who have said they're working to hack me or dox me (little do they know I'm behind 5 firewalls and I use a private tab). I think giving them a legal avenue to report me and have the government do their doxing for them is not so swell.
Comment by WorkingDead at 20/01/2023 at 12:22 UTC
6 upvotes, 1 direct replies
Are you aware of any moderators, especially on the major news or politics subs, who are working on behalf of government agencies? Political parties?
Comment by Zak at 20/01/2023 at 00:59 UTC
11 upvotes, 3 direct replies
Something important to keep in mind here is that the role of the court is not to decide what the policy *should* be, but to interpret the laws that already exist as they relate to each other and to a concrete situation.
It seems pretty clear to me that a plain reading of section 230[1] *does* protect recommendation algorithms even if they recommend something illegal. Recommendation algorithms are tools that "pick, choose, analyze, or digest content", and the services that run them cannot be treated as the publisher of third-party content.
1: https://www.law.cornell.edu/uscode/text/47/230
I'm not sure the law *should* protect the latest individualized recommendation algorithms. Nothing like them had been conceived at the time it was drafted (at least, not at scale), and their potential to suck vulnerable people down rabbit holes of harmful and tortious or criminal content is extreme. A change in law would be the appropriate way to address the issue, although I fear what that would look like. The last time[2] Congress tried something like that, it was awful.
2: https://en.wikipedia.org/wiki/FOSTA-SESTA
I don't know how to draft a law that distinguishes between those algorithms and search engines, or something like Reddit, which uses a ranking mechanism that isn't individualized in the same way.
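A toy sketch of the distinction I have in mind (purely illustrative, not how any real site works):

```python
posts = [
    {"id": 1, "topic": "cooking", "score": 900},
    {"id": 2, "topic": "extremism", "score": 40},
    {"id": 3, "topic": "gardening", "score": 300},
]

# Non-individualized ranking: everyone sees the same order, computed
# only from the content's own signals (here, just the vote score).
def rank_global(posts):
    return sorted(posts, key=lambda p: p["score"], reverse=True)

# Individualized recommendation: the order depends on this user's
# history, so two users can be steered toward very different feeds.
def rank_personalized(posts, user_history_topics):
    return sorted(
        posts,
        key=lambda p: (p["topic"] in user_history_topics, p["score"]),
        reverse=True,
    )

print([p["id"] for p in rank_global(posts)])                       # [1, 3, 2]
print([p["id"] for p in rank_personalized(posts, {"extremism"})])  # [2, 1, 3]
```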
Comment by i_Killed_Reddit at 20/01/2023 at 09:35 UTC
4 upvotes, 0 direct replies
A lot of headache for a volunteer job which is done for free, if this goes ahead.
Would probably stop moderating.
Comment by Khyta at 20/01/2023 at 11:55 UTC
3 upvotes, 2 direct replies
Are European mods also affected by this?
Comment by [deleted] at 20/01/2023 at 10:50 UTC
3 upvotes, 0 direct replies
Looks like it’s coming to the end of my moderating stint on Reddit. As volunteers, we face enough bullshit as it is just from crazed users who threaten us, attempt to dox, attack, generalise and brigade us - now this? Moderating isn’t worth it anymore. If it ever was.
Comment by DreadknotX at 20/01/2023 at 13:49 UTC
3 upvotes, 0 direct replies
At that point we would need to get some kind of insurance for the subs we moderate, and with what money? This would destroy the site!
Comment by vbullinger at 20/01/2023 at 14:29 UTC
3 upvotes, 0 direct replies
Here's EFF's take on it: https://www.eff.org/issues/cda230
Comment by cyrilio at 19/01/2023 at 22:40 UTC
9 upvotes, 1 direct replies
I mod /r/Drugs, where moderation is critical for legal, informational, and harm reduction reasons. Considering that last year 110,000 Americans died from drug poisoning[1] and 1.16 million Americans were arrested for drug-related offenses[2], this brief seems like a good thing.
1: https://www.cdc.gov/nchs/nvss/vsrr/drug-overdose-data.htm
2: https://drugabusestatistics.org/drug-related-crime-statistics/
What would actually help, of course, is better drug regulation and education...
Comment by OPINION_IS_UNPOPULAR at 19/01/2023 at 22:34 UTC
7 upvotes, 0 direct replies
I like the selection of subreddits highlighted. ;) Very appropriate for your audience. Give your general counsel a raise!
Also, TIL about the Heisman Trophy
Comment by happyxpenguin at 19/01/2023 at 22:25 UTC
6 upvotes, 0 direct replies
Based on the summaries of both court cases and the question presented before the court, I seriously see the court ruling in the plaintiffs' favor. I think it'd be different if the plaintiffs were suing because JoeSchmoe got his post removed from /r/playstation because he posted about a game on /r/wii or something. But these cases allege that Google, Twitter, et al. are providing material support for terror attacks and organizations by failing to remove terrorist accounts/posts and, in some cases, recommending them to users.
Comment by Jadziyah at 20/01/2023 at 04:11 UTC
5 upvotes, 0 direct replies
I wonder if Discord moderators are being made aware of this?