Detecting and mitigating content manipulation on Reddit

https://www.reddit.com/r/RedditSafety/comments/b0a8he/detecting_and_mitigating_content_manipulation_on/

created by worstnerd on 12/03/2019 at 17:06 UTC

462 upvotes, 51 top-level comments (showing 25)

A few weeks ago we introduced this subreddit with the promise of starting to share more around our safety and security efforts. I wanted to get this out sooner...but I am worstnerd after all! In this post, I would like to share some data highlighting the results of our work to detect and mitigate content manipulation (posting spam, vote manipulation, information operations, etc).

At a high level, we have scaled up our proactive detection (i.e. before a report is filed) of accounts responsible for content manipulation on the site. Since the beginning of 2017 we have increased the number of accounts suspended for content manipulation by 238%, and today over 99% of those are suspended before a user report is filed (vs 29% in 2017)!

Compromised accounts (accounts accessed by malicious actors who have obtained the password) are prime targets for spammers, vote-buying services, and other content manipulators. We have reduced their impact by proactively scouring 3rd party password breach datasets for login credentials and forcing password resets on Reddit accounts with matching credentials, to ensure hackers can’t execute an account takeover (“ATO”). We’ve also gotten better at detecting login bots (bots that try logging into accounts). Through measures like these, over the course of 2018 we reduced the successful ATO deployment rate (accounts that were successfully compromised and then used to vote/comment/post/etc.) by 60%. This is a measure of how quickly we detect compromised accounts, and thus how much impact they can have on the site; we expect the metric to become even more robust as we continue to implement more tooling. Additionally, we increased the number of accounts put through a forced password reset by 490%. In 2019 we will be spending even more time working with users to improve account security.
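
The post doesn't say which breach datasets or matching pipeline Reddit uses, but as an illustration of the general technique: the public Have I Been Pwned range API lets a service check whether a password appears in known breach dumps without ever transmitting the password itself, via k-anonymity. A minimal Python sketch (the function name and usage are our own, not Reddit's):

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in the HIBP breach corpus.

    k-anonymity: only the first 5 hex characters of the SHA-1 hash are
    sent over the wire; the password itself never leaves the machine.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        # Response lines look like "SUFFIX:COUNT".
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

# A nonzero count is the kind of signal that would justify a forced reset.
print(breach_count("hunter2"))
```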

While on the subject, three things you can do right now to keep your Reddit account secure:

1: Set up two-factor authentication: https://www.reddithelp.com/en/categories/using-reddit/your-reddit-account/how-set-two-factor-authentication

Some of our more recent efforts have focused on reducing community interference (i.e. “brigading”). This includes efforts to mitigate (in real time) vote brigading, targeted sabotage (Community A attempting to hijack the conversation in Community B), and general shitheadery. Recently we have been developing additional advanced mitigation capabilities, and in the past 3 months we have reduced successful brigading in real time by 50%. We are working with mods on further improvements and continue to beta test additional community tools (such as the ability to auto-collapse comments from certain users, which is being tested with a small number of communities for feedback). If you are a mod and would like to be considered for the beta test, reach out to us here[2].

2: https://www.reddit.com/message/compose?to=%2Fr%2Freddit.com&subject=Crowd%20Control%20Subreddit%20Request
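
The post doesn't describe how the auto-collapse tool (the link above calls it “Crowd Control”) decides what to collapse. Purely as an illustrative sketch, a heuristic in the same spirit might collapse comments from authors with no prior standing in the community; every signal and threshold below is invented for the example, not Reddit's actual logic:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    account_age_days: int
    karma_in_this_subreddit: int

def should_collapse(c: Comment,
                    min_age_days: int = 30,
                    min_local_karma: int = 1) -> bool:
    """Collapse comments from accounts that are brand new or have never
    earned karma in this community. Thresholds are made up."""
    return (c.account_age_days < min_age_days
            or c.karma_in_this_subreddit < min_local_karma)

# Example: a day-old account with no history here gets auto-collapsed.
print(should_collapse(Comment("newuser", account_age_days=1, karma_in_this_subreddit=0)))
```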

We have more work to do, but we are encouraged by the progress. We are working on more cool projects and are looking forward to sharing the impact of them soon. We will stick around to answer questions for a little while, so fire away. Please recognize that in some cases we will be vague so as to not provide too many details to malicious actors.

Comments

Comment by [deleted] at 12/03/2019 at 17:37 UTC

62 upvotes, 1 direct reply

Cool, are you going to have any data on this to release? I'm sure it's a lot to ask but I'd love to know things like:

1. Where banned accounts originate

2. What subs do the most brigading

3. What you consider suspicious activity on an account, and how you keep from banning real users?

4. Major peaks in misinformation or manipulation campaigns, tied to major events or news.

And so on. If the data can be made into graphs that would be amazing, but again I know it's a big ask. Even a few charts would make a lot of us happy I'm sure.

Comment by shiruken at 12/03/2019 at 17:19 UTC*

35 upvotes, 1 direct reply

What are you doing to mitigate brigading organized off the Reddit platform (e.g. Discord or *chan)?

Comment by [deleted] at 12/03/2019 at 17:11 UTC

27 upvotes, 4 direct replies

[deleted]

Comment by shiruken at 12/03/2019 at 17:20 UTC*

23 upvotes, 2 direct replies

Is "general shitheadery" a quantifiable metric? If so, what subreddits' users are the most generally shitheadery?

Comment by vswr at 12/03/2019 at 17:12 UTC

69 upvotes, 4 direct replies

general shitheadery

PLEASE tell me this was the exact term used in the conference room when you guys met to discuss details.

Comment by BeerJunky at 12/03/2019 at 17:40 UTC

36 upvotes, 1 direct reply

Glad to see efforts are being taken to make the site more secure (I'm a security person by trade, so this warms my heart). Are there any plans to push the 2FA option a bit more? To be honest, I don't think I've seen it mentioned outside of this post, and it's something users should be heavily encouraged to use. I don't think the average user knows this feature exists, and if they do, I don't think they're aware of why they should be using it.

Comment by Pyronic_Chaos at 12/03/2019 at 18:09 UTC

15 upvotes, 1 direct reply

content manipulation (posting spam, vote manipulation, information operations, etc).
Compromised accounts (accounts accessed by malicious actors who have obtained the password) are prime targets for spammers, vote-buying services, and other content manipulators.

I have actually seen a huge influx of spam from what are either compromised accounts or newly created accounts (~1–3 months old, 0 karma) from webstores posing as users to promote their stores (namely Chinese webstores selling t-shirts and other doodads). The typical MO is:

I do my part and use the 'report' feature every time I see this (about once a day, more or less). What other steps can I take? I know the mods of the individual subs do their part and remove the links, but can Reddit admins do anything to curb this behavior? It's just really annoying to see an ad for a webstore (which is a sketchy website anyway) get manipulated so high. If I report the post as "Spam", does this trigger a different system/action?

Comment by NombreGracioso at 12/03/2019 at 17:17 UTC

5 upvotes, 0 direct replies

Thanks for sharing the info and for your hard work! Also, I am curious... Could we know which subs are being used for testing those new features? Just to see if there is any noticeable difference...

Comment by ColorProgram at 12/03/2019 at 19:59 UTC

4 upvotes, 0 direct replies

I would just like to say thank you, and that I've noticed the changes since early 2017. Keep up the good work.

Can I ask if you can describe any broad or micro-level trends you have seen? What have brigades and vote manipulation been focused on? Are there any common intentions you've noticed?

i.e. political narrative stifling and/or framing contests

Comment by Unconfidence at 12/03/2019 at 21:24 UTC

9 upvotes, 1 direct reply

My question is simple.

If Tom Hanks has a reddit account, and a malicious actor takes it over, is that a....TOMATO?

Comment by diggitySC at 12/03/2019 at 21:08 UTC

5 upvotes, 0 direct replies

Will there be any historical analysis? (Looking at a subreddit's previous history to see whether there was existing manipulation?)

Comment by [deleted] at 12/03/2019 at 17:42 UTC*

15 upvotes, 1 direct reply

[deleted]

Comment by [deleted] at 12/03/2019 at 17:24 UTC

7 upvotes, 1 direct reply

What is content manipulation?

Comment by DubTeeDub at 12/03/2019 at 17:17 UTC

18 upvotes, 9 direct replies

Some of our more recent efforts have focused on reducing community interference (i.e. “brigading”). This includes efforts to mitigate (in real time) vote brigading, targeted sabotage (Community A attempting to hijack the conversation in Community B), and general shitheadery. Recently we have been developing additional advanced mitigation capabilities, and in the past 3 months we have reduced successful brigading in real time by 50%.

How exactly have you reduced brigading by 50%? Is that based only on reported links brought to your attention? What actions are taken to stop brigading and shitbaggery?

We are working with mods on further improvements and continue to beta test additional community tools (such as the ability to auto-collapse comments from certain users, which is being tested with a small number of communities for feedback). If you are a mod and would like to be considered for the beta test, reach out to us here.

I am very interested in this beta and would like to learn more.

--------------------------------------------------------------------------------

Also, while I have you here, can you guys do something about the anti-race-mixing subreddits like r/AntiOilDrilling, r/AgainstSingleMothers, and r/Cringeanarchy?

They all continue to push racial slurs and hate speech against minorities, and of late are particularly targeting white women who date minorities.

Comment by 50calPeephole at 12/03/2019 at 18:22 UTC

4 upvotes, 1 direct reply

How do you separate community brigading from community engagement in a heated topic in another sub, like /r/news or /r/politics?

Comment by DeepSeaDiving at 12/03/2019 at 17:12 UTC

5 upvotes, 1 direct reply

Some communities have been speculating about this happening after seeing new posts or comments buried almost immediately, regardless of content. Will you notify the affected communities, or publish a list of those affected?

Comment by BelleAriel at 13/03/2019 at 23:38 UTC

2 upvotes, 0 direct replies

Proper anti-brigading tools would be nice, instead of having to use saferbot, which is a pain in the ass and causes a lot of drama for us mods simply for trying to safeguard our community. I sent the admins a detailed message to r/reddit.com about this (the trouble we were receiving at r/fuckthealtright) but have received nothing back. Would really appreciate a response. Thanks.

Comment by Orcwin at 13/03/2019 at 10:46 UTC

2 upvotes, 1 direct reply

Mods are probably in a position to spot account takeovers more easily than others could. If someone starts placing spam-link-infested posts where they didn't before, a takeover seems likely. Is there any way for us to report those to you, to provide you with more examples to improve your detection?
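
The signal Orcwin describes (a long-dormant account abruptly posting links to domains it never used before) is concrete enough to sketch. The window and thresholds below are made-up illustration values, not anything Reddit has published:

```python
from datetime import datetime, timedelta

def looks_like_takeover(post_times: list[datetime],
                        post_domains: list[str],
                        recent_window: int = 10,
                        dormancy: timedelta = timedelta(days=180),
                        new_domain_ratio: float = 0.8) -> bool:
    """Flag a long-dormant account that suddenly posts links to domains it
    never used before. Inputs are parallel, chronologically sorted lists;
    every threshold is invented for illustration."""
    if len(post_times) <= recent_window:
        return False  # not enough history to compare against
    # Gap between the last "old" post and the first post of the recent burst.
    gap = post_times[-recent_window] - post_times[-recent_window - 1]
    old_domains = set(post_domains[:-recent_window])
    recent = post_domains[-recent_window:]
    fresh = sum(d not in old_domains for d in recent) / len(recent)
    return gap >= dormancy and fresh >= new_domain_ratio

if __name__ == "__main__":
    base = datetime(2013, 1, 1)
    times = [base + timedelta(days=i) for i in range(20)]  # old activity
    times += [datetime(2019, 3, 1) + timedelta(hours=i) for i in range(10)]  # sudden burst
    domains = ["imgur.com"] * 20 + ["sketchy-tshirt-store.example"] * 10
    print(looks_like_takeover(times, domains))  # True
```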

Comment by DesmondIsMolested at 12/03/2019 at 22:56 UTC

5 upvotes, 0 direct replies

Then when are you going to do something about subs dedicated to brigading?

/r/AgainstHateSubs

/r/ShitRedditSays

/r/TopMindsOfReddit

And subs that routinely call to brigade?

/r/ChapoTrapHouse /r/ChapoTrapHouse2

You're full of shit.

Comment by ani625 at 12/03/2019 at 18:14 UTC

2 upvotes, 0 direct replies

Here's hoping we'll finally get some tools to detect and prevent brigading.

Comment by [deleted] at 12/03/2019 at 20:58 UTC

3 upvotes, 0 direct replies

Would mods breaking their own subreddit rules and acting in bad faith towards their community count as content manipulation? If it does, are there any plans to handle reports of bad-faith moderation?

Comment by LeCrushinator at 12/03/2019 at 23:59 UTC

2 upvotes, 1 direct reply

Will there be any effort to reduce bots and trolls in the run-up to the 2020 election? I remember all kinds of shady new accounts in 2016, and remember thinking back then that maybe there should be some kind of membership requirement before posting in political subreddits near an election. Not sure if that would work, but I'm curious whether you're taking any steps to keep it from happening as much in 2020.

Comment by [deleted] at 13/03/2019 at 22:19 UTC

1 upvote, 0 direct replies

If the community interference stuff is what keeps minimising random people's comments, it isn't working. Comments with as many as 700 upvotes, by accounts with perfectly fine histories that aren't brigading, are being minimised, and it's getting really annoying: example

EDIT: Looking at the account, it does actually seem suspicious, as it had no posts for 5 years and then suddenly came back posting about a company. However, this doesn't stop perfectly normal comments from getting hidden from view. Maybe a feature such as marking suspicious accounts next to their name would be more useful, as it would let people judge for themselves without the comments being completely hidden.

Comment by [deleted] at 13/03/2019 at 22:20 UTC

1 upvote, 0 direct replies

What are you going to do about the power users blatantly abusing Reddit for profit?

Comment by Smooth_Yak2 at 22/03/2019 at 14:08 UTC

1 upvote, 0 direct replies

end my life