Reddit’s Defense of Section 230 to the Supreme Court

https://www.reddit.com/r/reddit/comments/10h2fz7/reddits_defense_of_section_230_to_the_supreme/

created by traceroo on 20/01/2023 at 17:15 UTC*

1904 upvotes, 155 top-level comments (showing 25)

Hi everyone, I’m u/traceroo a/k/a Ben Lee, Reddit’s General Counsel, and I wanted to give you all a heads up regarding an important upcoming Supreme Court case on Section 230 and why defending this law matters to all of us.

So, what is Section 230 and why should you care? Congress passed Section 230 to fix a weirdness in the existing law that made platforms that try to remove horrible content (like Prodigy which, similar to Reddit, used forum moderators) more vulnerable to lawsuits than those that didn’t bother. 230 is super broad and plainly stated: “No provider or user” of a service shall be held liable as the “publisher or speaker” of information provided by another. Note that Section 230 protects users of Reddit, just as much as it protects Reddit and its communities.

Section 230 was designed to encourage moderation and protect those who interact with other people’s content: it protects our moderators who decide whether to approve or remove a post, it protects our admins who design and keep the site running, it protects everyday users who vote on content they like or…don’t. It doesn’t protect against criminal conduct, but it does shield folks from getting dragged into court by those that don’t agree with how you curate content, whether through a downvote or a removal or a ban.

Much of the debate regarding Section 230 today revolves around the biggest platforms, all of which moderate very differently from how Reddit (and old-fashioned Prodigy) operates. u/spez testified[1] in Congress a few years back explaining why even small changes to Section 230 can have serious unintended consequences, often hurting everyone other than the largest platforms Congress is trying to rein in.

1: https://www.redditinc.com/blog/hearing-on-fostering-a-healthier-internet-to-protect-consumers/

Which brings us to the Supreme Court. This is the first opportunity for the Supreme Court to say anything about Section 230 (every other court in the US has already agreed that 230 provides very broad protections that include “recommendations” of content). The facts of the case, Gonzalez v. Google, are horrible (terrorist content appearing on Youtube), but the stakes go way beyond YouTube. In order to sue YouTube, the plaintiffs have argued that Section 230 does not protect anyone who “recommends” content. Alternatively, they argue that Section 230 doesn’t protect algorithms that “recommend” content.

Yesterday, we filed a “friend of the court” amicus brief[2] to impress upon the Supreme Court the importance of Section 230 to the community moderation model, and we did it jointly with several moderators of various communities. This is the first time Reddit as a company has filed a Supreme Court brief and we got special permission to have the mods sign on to the brief without providing their actual names, a significant departure from normal Supreme Court procedure. Regardless of how one may feel about the case and how YouTube recommends content, it was important for us all to highlight the impact of a sweeping Supreme Court decision that ignores precedent and, more importantly, ignores how moderation happens on Reddit. You can read the brief[3] for more details, but below are some excerpts from statements by the moderators:

2: https://www.supremecourt.gov/DocketPDF/21/21-1333/252674/20230119145120402_Gonzalez%20-%20Reddit%20bottomside%20amicus%20brief.pdf

3: https://www.supremecourt.gov/DocketPDF/21/21-1333/252674/20230119145120402_Gonzalez%20-%20Reddit%20bottomside%20amicus%20brief.pdf

Ultimately, while the decision is up to the Supreme Court (the oral arguments will be heard on February 21 and the Court will likely reach a decision later this year), the possible impact of the decision will be felt by all of the people and communities that make Reddit, Reddit (and more broadly, by the Internet as a whole).

We encourage all Redditors, whether you are a lurker or a regular contributor or a moderator of a subreddit, to make your voices heard. If this is important or relevant to you, share your thoughts or this post with your communities and with us in the comments here. And participate in the public debate regarding Section 230.

Edit: fixed italics formatting.

Comments

Comment by reddit at 20/01/2023 at 17:15 UTC

1 upvotes, 1 direct replies

Please see thread for the full comments submitted by the moderators who signed onto the Brief with us.

Comment by [deleted] at 20/01/2023 at 17:39 UTC

77 upvotes, 2 direct replies

How will this impact mods who aren’t in the USA? I know a lot of mods aren’t US-based. I did see a few comments talking about it, but not too much info. Most mod teams are filled with people from different spots in the world.

I know I wouldn’t be modding anymore if this passes. Being held legally accountable for what other people upload isn’t very encouraging. Subs that have tens of millions of members would find it impossible to keep up with everything that is going on, and even some smaller subs for sensitive topics could find it overwhelming.

Comment by AbsorbedChaos at 20/01/2023 at 17:35 UTC

250 upvotes, 6 direct replies

If they pass this off and hold moderators liable for other people’s actions/words, it would only further their ability to control and now legally prosecute us for what we post on the internet. This is not okay by any means. We all need our voices heard on this one. Whether you agree or disagree with my comment or Section 230, please make your voices heard.

Comment by shiruken at 20/01/2023 at 17:49 UTC

108 upvotes, 6 direct replies

In order to sue YouTube, the plaintiffs have argued that Section 230 does not protect anyone who “recommends” content. Alternatively, they argue that Section 230 doesn’t protect algorithms that “recommend” content.

On Reddit, users drive the recommendation algorithms by upvoting and downvoting content. If the plaintiffs are successful in their argument, could this mean that the simple act of voting on Reddit could open a user up to liability?

Comment by QuicklyThisWay at 20/01/2023 at 17:48 UTC

38 upvotes, 2 direct replies

What will be done if this goes the wrong way? Will Reddit’s servers and/or headquarters move to another country if legal action starts being taken against moderators? Will community funds be available for legal fees for those that choose to stay and fight?

I moderate several communities, but one in particular gets hateful content on a daily basis. We try our best to take action on what is reported and have AutoMod set up to help remove hateful content, but we don’t, and likely won’t, go through every post and comment.

On a separate note, there are communities like r/iamatotalpieceofshit that highlight terrible people. The person posting there likely isn’t advocating for what is being shown, but will the moderators then become liable for hateful content? I posted something there which was then crossposted to other subreddits that approved of and praised the hateful content. As a result, my account was suspended for a few days.

There are many communities on Reddit with a VERY specific context that doesn’t redeem the content of a post, but vilifies it. If there is no gray area in this ruling, all of those communities will be in big trouble. What is already a thankless volunteer activity for most will become a burden not worth continuing.

Comment by platinumsparkles at 20/01/2023 at 17:51 UTC

19 upvotes, 2 direct replies

We encourage all Redditors, whether you are a lurker or a regular contributor or a moderator of a subreddit, to make your voices heard. If this is important or relevant to you, share your thoughts or this post with your communities and with us in the comments here. And participate in the public debate regarding Section 230

How exactly can we participate in the public debate? Just by commenting support on this post?

What more can we do?

As one of the many volunteer mods here, I strongly disagree with altering Section 230 or interpreting it in a way that wouldn't protect us from being unnecessarily sued.

The plaintiffs have argued that Section 230 does not protect anyone who “recommends” content. Wouldn't that be all users of Reddit since that's what the upvote button is for? This could potentially mean all mods since we need to "approve" things that get reported but aren't rule breaking.

Alternatively, they argue that Section 230 doesn’t protect algorithms that “recommend” content. If that would affect our ability to use moderation tools like auto-mod, I'm not sure how moderating a large, growing sub would be possible.

Comment by graeme_b at 20/01/2023 at 18:44 UTC

19 upvotes, 1 direct replies

Section 230 is the very foundation of discussion on the internet. It should be changed with *extreme* caution. Though things may happen on the internet that legislators or judges do not like, it is far better to devise a solution tailored to the specific problem they dislike.

Making an extremely broad based change in response to a particular situation is the epitome of bad cases making bad law. A general change would have massive and unforeseeable unintended consequences. We can guess at some first order impacts but the second order effects would dwarf those.

Comment by Python_Child at 20/01/2023 at 17:51 UTC

20 upvotes, 4 direct replies

As an American moderator, what would happen to us moderators if this were to go through? Could we be sued, arrested, or fined for moderating?

Comment by GoryRamsy at 21/01/2023 at 04:26 UTC

12 upvotes, 1 direct replies

This is not only Reddit history, this is internet history.

Comment by OGWhiz at 20/01/2023 at 21:03 UTC

11 upvotes, 2 direct replies

I'm Canadian, so I don't quite know how this would affect me, but it's a dangerous precedent to set regardless. I moderate one extremely large generic community, and then a handful of other smaller communities that focus on true crime. The true crime ones specifically have led me to make multiple reports to the FBI due to people posting some pretty alarming things that glorify mass shooters. In some cases, arrests were eventually made[1]. In others, it was too late and a shooting took place[2]. In even more cases, I don't know the outcome.

1: https://www.courier-journal.com/story/news/crime/2019/05/03/man-accused-plotting-shooting-kentucky-high-school-new-charges/3660686002/

2: https://www.indystar.com/story/news/crime/2022/12/21/greenwood-mall-shooting-fbi-provide-update-into-mall-shooting-elisjsha-dicken-jonathan-sapirman/69740284007/

I've also made FBI reports regarding illegal pornography being linked to Reddit. If they pass this off and hold me liable, as well as other moderators and admins liable for other people's words and actions, I will not be here to make those reports to the FBI. I won't be here to ensure these true crime communities discuss the topics in a civil manner with no glorification.

Yes, this is a hobby I do outside of my normal day of work. Yes, it's something I do for the enjoyment of bettering the communities I participate in. That doesn't make it an easy thing to do. I've been subjected to illegal pornography that I've had to report. I should not be liable for that if I miss it out of the tens of thousands of comments being posted daily. I should not be responsible for someone making posts about going to a mall and murdering people and then proceeding to do so. I do take it seriously, I do report it to proper authorities whenever I see it. But if I'm not online and that happens, am I going to be to blame?

Maybe I'm misunderstanding this. I'm not a US Citizen. I don't have an understanding of the Supreme Court. I don't have an understanding of Section 230 other than what I've read here today.

That said, I reiterate that this is a dangerous precedent to set.

Comment by Kicken at 20/01/2023 at 18:00 UTC*

21 upvotes, 3 direct replies

While I am not enlightened on the exact details of 230 or what would change if it were modified/overturned, I can say that if I, and my other moderators, were held liable for all content posted to any subreddit I moderate, I would find it hard to justify continuing to moderate. I believe it would be impossible to actually staff a moderation team at that point, as well, even if I took the risk myself. As such, these subreddits would become even less maintained and at risk of hosting harmful content. Seems counterintuitive.

Comment by [deleted] at 20/01/2023 at 21:44 UTC

7 upvotes, 2 direct replies

[deleted]

Comment by mikebellman at 20/01/2023 at 17:51 UTC

18 upvotes, 3 direct replies

This is pretty heady stuff. I can’t even say for certain that I grasp the entire scope of this. Social media (and personal technology overall) grows and changes faster than any legislation could ever keep up with. Plus, since tech is more or less a loose international community, reining in various companies hosted in different countries makes it even more problematic, let alone companies that choose to relocate their servers and HQ.

I think we can all agree that protections and safety for end users and minors supersede any entertainment value of content. I hope Reddit continues to be a more positive influence in social network spheres. It has certainly changed a lot these past years.

I’m glad to be an old redditor (14 years) and put most of my trust in this forum. Even when I’ve been unfairly dicked-around by volunteer mods who aren’t mindful of the ban-hammer.

All the best,

Real name: Mike Bellman

Comment by [deleted] at 20/01/2023 at 18:35 UTC

5 upvotes, 2 direct replies

[deleted]

Comment by [deleted] at 20/01/2023 at 23:26 UTC

5 upvotes, 1 direct replies

[deleted]

Comment by esberat at 20/01/2023 at 18:52 UTC

10 upvotes, 1 direct replies

Section 230 is the law that says websites are not responsible for third-party content and cannot be sued over that content. It protects all websites, and all users of websites, when content is posted by someone else on their site.

For example: if I post something defamatory on Reddit, the law says the victim can sue me but not Reddit, and also that Reddit has the right to moderate the content on its site as it sees fit. Discussions about 230 picked up after the 2016 elections, with Democrats and Republicans pushing for different reasons but toward the same end (repeal/revoke). Some of those arguments rested on incomplete information and/or misinterpretation of the law; some were deliberately false.

Argument 1: "When a company starts moderating content, it is no longer a platform but a publisher; it should define itself as a publisher, take on legal obligations and responsibilities, and lose its protection under 230."

The claim that 230 separates publishers from platforms is completely false. The idea that the immunity provided by the law can be won or lost depending on whether a site is a "platform" or a "publisher" is a fabrication, because there is no label a website has to carry in order to be protected under 230. Moreover, online services did not define themselves as platforms to gain 230 protection; they already had it.

At no point in a case involving 230 does it matter whether a particular website is a platform or a publisher; there is no need to determine that. The only thing that matters is the content in question. If the content was created by someone else, the website hosting it cannot be sued over it. If Twitter itself writes a fact-check and/or creates content, then it is liable for that. This is 230 at its simplest and most basic: responsibility for content rests with the online creator, not with whoever hosts the content.

Regardless of 230, you can be a publisher and a platform at the same time, meaning a publisher of your own content and a platform for others' content. Newspapers are an example: they are publishers of the articles they write themselves and platforms for content they publish but did not write.

Argument 2: "Section 230 is good for Big Tech"

230 benefits us internet users more than it benefits Big Tech. It supports free speech by ensuring that we are not responsible for what others say.

Argument 3: "A politically biased website is not neutral and should therefore lose 230 protection"

There is no neutrality requirement in 230. The law does not treat online services differently because of their ideological neutrality or lack thereof; a site does not lose its protection under 230 whether it is neutral or not. On the contrary, 230 grants immunity to all of them and treats them all the same. And that's not a bug, it's a feature of 230.

Attempting to legislate such a "neutrality" or content-based requirement for online platforms isn't possible anyway, as it would be unconstitutional under the First Amendment.

Argument 4: "230 means companies can never be sued"

230 only protects websites from being sued over content created by others. Websites can be sued for plenty of other reasons; such suits are still being filed today, and they tend to end in losses for those who sue:

https://www.theverge.com/2020/5/27/21272066/social-media-bias-laura-loomer-larry-klayman-twitter-google-facebook-loss

Argument 5: '230 is why big and powerful internet companies are big and powerful'

230 is not specific to large internet companies; it applies to the entire internet. One could even say that 230 helps encourage competition, as the cost of running a website in a world without 230 would be very high.

Moreover, giants such as Facebook, Twitter, and Google have armies of lawyers and the money to deal with lawsuits filed against them, whereas small companies do not, so 230 benefits small companies more than big ones.

Argument 6: 'When traditional publishers make a mistake, they are responsible for that mistake. If Twitter becomes a traditional publisher, it will also be responsible'

230 is not about who you are, but about what you do. Traditional publishers are responsible for the content they create themselves. If Twitter creates its own content, it is responsible for that too. This applies to everyone, not just Twitter.

The conservatives who most support removing or replacing 230 are the ones who should least support it, because if 230 is removed or changed, platforms like Twitter will be held responsible for all content and sued over it, so they will censor more and delete more user accounts to keep themselves out of legal danger. It is not difficult to guess who will be censored and whose accounts will be deleted by looking at their political stance.

For example, the right-wing social media app Parler hosts its much-discussed content thanks to Section 230. Without 230, that content would be deleted and its users banned, so 230 actually does more for the right than the left.

Comment by relevantusername2020 at 20/01/2023 at 18:13 UTC

5 upvotes, 0 direct replies

I don't have much to add other than to say it's interesting, since I was just reading about the history of Section 230 last night.

Comment by Orcwin at 20/01/2023 at 18:52 UTC

4 upvotes, 1 direct replies

I'm not versed in the US legal system, so I can't quite follow.

First you state

the plaintiffs are arguing for a narrow interpretation of 230

But the rest of the post, including the arguments and statements, seems to argue on the premise that Section 230 would be scrapped entirely.

Is that how US law works? Is it not possible to amend a law, or does it have to be scrapped entirely?

If amending laws *is* an option, then where does that leave us? You also wrote:

the plaintiffs have argued that Section 230 does not protect anyone who “recommends” content. Alternatively, they argue that Section 230 doesn’t protect algorithms that “recommend” content

Doesn't that imply the plaintiff is asking for the law to be amended to exclude recommendation algorithms? If so, can you explain how that affects us?

There's a lot I'm unclear on here, and a lot I might be misinterpreting. Hopefully you can clear up the confusion.

Comment by i_Killed_Reddit at 20/01/2023 at 19:27 UTC

15 upvotes, 2 direct replies

This is the most absurd thing if it moves ahead. Neither I nor any of the other moderators can be online 24/7 to be liable for every troll post/comment made in bad faith.

We have used u/automoderator to the maximum of our ability to filter spam words and Reddit TOS violations. However, things still slip in, and people do find loopholes to bypass the rules.

This is voluntary work, done without any financial compensation, because we love the communities we moderate.

If we are held responsible for content we do not believe in or agree with but are a bit late to moderate/remove, then it's better to quit moderating and save ourselves the IRL headache and legal battles for something that isn't our fault.

Comment by areastburn at 20/01/2023 at 20:06 UTC

9 upvotes, 4 direct replies

Are mods still unpaid? I just can’t wrap my head around any positive aspects of being a Reddit mod.

Comment by [deleted] at 20/01/2023 at 18:05 UTC*

15 upvotes, 2 direct replies

This got shared on AITA, despite being against AITA's rules. I reported it as such and got a warning for that. I guess Reddit doesn't care about users so long as the word it wants out gets out.

Comment by Premyy_M at 21/01/2023 at 05:15 UTC

4 upvotes, 0 direct replies

If I post to my user profile and someone else comments, doesn't that effectively put me in a mod position? Despite being a completely unqualified novice, I'm now responsible. Doesn't add up. It's an incredibly high standard to impose on random internet users. Such standards don't seem to exist in real life: all kinds of people are in positions to vote irresponsibly for their own gain, and if they choose to do so, they are not held responsible.

Mod abuse can be problematic, and internet activity that leads to real-life events or threats needs to be addressed, but not with misplaced responsibility. Meanwhile, the bad actors who actually commit the offence face no accountability for their actions?

Comment by culturepulse at 21/01/2023 at 12:42 UTC

4 upvotes, 0 direct replies

This is really well stated. An important question perhaps: Is Reddit "recommending" content, or is it "sorting" content? Going back to the earliest hot algorithms on the platform, Reddit didn't really recommend content, it sorted it.
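
For reference, here is a minimal Python sketch of that classic "hot" sort, reconstructed from memory of Reddit's old open-source ranking code (the exact constants are assumptions and may not match current behavior): every post gets a score from its votes and age alone, with no per-user recommendation model involved.

```python
from datetime import datetime, timedelta, timezone
from math import log10

# Sketch of the classic "hot" ranking as recalled from Reddit's old
# open-source code. The constants (the late-2005 epoch offset and the
# 45000-second divisor) are assumptions and may not match today's ranking.

def epoch_seconds(date: datetime) -> float:
    """Seconds since the Unix epoch for an aware datetime."""
    return (date - datetime(1970, 1, 1, tzinfo=timezone.utc)).total_seconds()

def hot(ups: int, downs: int, date: datetime) -> float:
    """Score a post purely from its votes and age; higher sorts first."""
    score = ups - downs
    order = log10(max(abs(score), 1))            # diminishing returns on votes
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = epoch_seconds(date) - 1134028003   # offset so newer posts score higher
    return round(sign * order + seconds / 45000, 7)

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    print(hot(500, 20, now - timedelta(hours=12)))  # older post with more votes
    print(hot(80, 5, now))                          # newer post with fewer votes
```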

Facebook and Twitter (the real reasons for this suit, as you kind of noted) are not doing the same thing as Reddit (particularly FB), because they control moderation centrally rather than decentrally, and this is patently different (see our research with other social media sites: http://thecensorshipeffect.com/). The effective shadow banning is why there's an argument for editorialization that is hard to tiptoe around.

Naturally, all sites need a minimum of content moderation so they don't spread illegal material, but I think the argument isn't really about that so much as it's about political suppression of ideas by central systems of management.

You're right to fight :) Thank you for what you're doing! We're here to help.

Comment by darknep at 20/01/2023 at 19:06 UTC*

9 upvotes, 2 direct replies

As a moderator on Reddit, I've seen how Section 230 has allowed my fellow moderators and me to enable the free flow of information on the site. It plays a vital role in allowing users to choose content that suits them best, and it improves moderation over time by allowing moderators with different viewpoints on moderation as a whole to discuss what is best for our users.

Modifications to this act would drastically diminish crucial experiences that benefit people’s wellbeing. When I was younger, I turned to the internet to find places that accepted me for who I was, as I was struggling with my gender identity as well as being (at the time unknowingly) autistic. I found communities on Reddit where I could freely express myself and search for friends who understood me. People in these groups often spoke about rather taboo content such as self-harm and suicide in order to uplift and support one another. Modifications to this act would effectively censor helpful communities like these, stifling expression and free speech on the site due to moderators not knowing how the law would interpret the content. This would mean people would not be able to find communities that fit their identity, beliefs, or interests. Modifications to this section could lead to millions of users not being able to find places they belong to, or a place they can call home. If it weren’t for Reddit and Section 230, I wouldn’t be here. If Section 230 is modified, I fear other people in the future may not be able to experience finding a place of acceptance.

Comment by RealistH8er at 20/01/2023 at 20:13 UTC

3 upvotes, 0 direct replies

I hope this applies here and may be a good point to bring up. Facebook uses not only its own policy enforcers but also input from ordinary users, both for content decisions and for banning people from FB as a whole and from Marketplace and other sub-communities. There is currently a class action in the works against them over unfair bans, failing to give reasons for them, and using reports from ordinary users as a means to ban others. If this class action is successful, what would that mean for Section 230 in general? And how would it affect Reddit?