created by worstnerd on 19/09/2019 at 22:04 UTC*
5126 upvotes, 206 top-level comments (showing 25)
The concern of content manipulation on Reddit is as old as Reddit itself. Before there were subreddits (circa 2005), everyone saw the same content and we were primarily concerned with spam and vote manipulation. As we grew in scale and introduced subreddits, we had to become more sophisticated in our detection and mitigation of these issues. The creation of subreddits also created new threats, with “brigading” becoming a more common occurrence (even if rarely defined). Today, we are not only dealing with growth hackers, bots, and your typical shitheadery, but we have to worry about more advanced threats, such as state actors interested in interfering with elections and inflaming social divisions. This represents an evolution in content manipulation, not only on Reddit, but across the internet. These advanced adversaries have resources far larger than a typical spammer. However, as with early days at Reddit, we are committed to combating this threat, while better empowering users and moderators to minimize exposure to inauthentic or manipulated content.
Our strategy has been to focus on fundamentals and double down on things that have protected our platform in the past (including the 2016 election[1]). Influence campaigns represent an evolution in content manipulation, not something fundamentally new. This means that these campaigns are built on top of some of the same tactics as historical manipulators (certainly with their own flavor). Namely, compromised accounts, vote manipulation, and inauthentic community engagement. This is why we have hardened our protections against these types of issues on the site.
This year alone, we have taken preventative actions on over 10.6M accounts with compromised login credentials (check yo’ self[2]), or accounts that have been hit by bots attempting to breach them. This is important because compromised accounts can be used to gain immediate credibility on the site, and to quickly scale up a content attack on the site (yes, even that throwaway account with password = Password! is a potential threat!).
2: https://haveibeenpwned.com/
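(The linked service exposes a k-anonymity "range" API for its Pwned Passwords corpus, so a password can be checked without ever sending it, or even its full hash, off the machine. A minimal sketch in Python; the example password is the throwaway one from the post:)

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus.

    Only the first 5 hex chars of the SHA-1 hash are sent to the API;
    the full hash never leaves this machine (k-anonymity).
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "SUFFIX:COUNT"; find our suffix, if present.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("Password!"))  # the throwaway password from the post
```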
The purpose of our anti-cheating rules is to make it difficult for a person to unduly impact the votes on a particular piece of content. These rules, along with user downvotes (because you know bad content when you see it), are some of the most powerful protections we have to ensure that misinformation and low quality content doesn’t get much traction on Reddit. We have strengthened these protections (in ways we can’t fully share without giving away the secret sauce). As a result, we have reduced the visibility of vote manipulated content by 20% over the last 12 months.
Content manipulation is a term we use to cover things like spam, community interference, etc. We have completely overhauled how we handle these issues, including a stronger focus on proactive detection and machine learning to help surface clusters of bad accounts. With our newer methods, we can improve detection more quickly and be more thorough in taking down all accounts connected to any attempt. We removed over 900% more policy-violating content in the first half of 2019 than in the same period of 2018, and 99% of it was removed before it was reported by users.
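(Reddit is understandably vague about the details here, but the general technique it names, using machine learning to surface clusters of bad accounts, can be sketched with off-the-shelf density-based clustering. Everything below, including the feature choices and thresholds, is illustrative only, not Reddit's actual pipeline:)

```python
# Illustrative sketch only: Reddit has not published its detection system.
# All feature names and numbers here are hypothetical.
import numpy as np
from sklearn.cluster import DBSCAN

# One row per account: [posts/day, fraction of posts that are reposts,
# mean seconds between actions, account age in days]
features = np.array([
    [40.0, 0.95,   3.0,   12.0],  # burst-posting repost account
    [38.0, 0.90,   4.0,   15.0],  # near-identical behavior -> same cluster
    [ 1.2, 0.05, 900.0, 2400.0],  # ordinary long-tenured user
])

# Density-based clustering groups accounts with near-identical behavior;
# coordinated rings form dense clusters, organic users fall out as noise (-1).
labels = DBSCAN(eps=5.0, min_samples=2).fit_predict(features)
print(labels)  # e.g. [ 0  0 -1]
```

Accounts operated in bulk tend to look near-identical along dimensions like these, which is why cluster-level takedowns ("taking down all accounts that are connected to any attempt") follow naturally from this kind of model.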
Outside of admin-level detection and mitigation, we recognize that a large part of what has kept the content on Reddit authentic is the users and moderators. In our 2017 transparency report[3] we highlighted the relatively small impact that Russian trolls had on the site. 71% of the trolls had 0 karma or less! This is a direct consequence of you all, and we want to continue to empower you to play a strong role in the Reddit ecosystem. We are investing in a safety product team that will build improved safety (user and content) features on the site. We are still staffing this up, but we hope to deliver new features soon (including Crowd Control, which we are in the process of refining thanks to the good feedback from our alpha testers). These features will start to provide users and moderators better information and control over the type of content that is seen.
The next component of this battle is the collaborative aspect. As a consequence of the large resources available to state-backed adversaries and their nefarious goals, it is important to recognize that this fight is not one that Reddit faces alone. In combating these advanced adversaries, we will collaborate with other players in this space, including law enforcement, and other platforms. By working with these groups, we can better investigate threats as they occur on Reddit.
These adversaries are more advanced than previous ones, but we are committed to ensuring that Reddit content is free from manipulation. At times, some of our efforts may seem heavy-handed (forcing password resets), and at other times they may be more opaque, but know that behind the scenes we are working hard on these problems. To provide additional transparency around our actions, we will publish a narrow-scope security report each quarter. It will focus on actions surrounding content manipulation and account security (note: it will not include information on legal requests or day-to-day content policy removals, as those will continue to be released annually in our Transparency Report[4]). We will get the first one out in October. If there is specific information you'd like, or questions you have, let us know in the comments below.
4: https://www.redditinc.com/policies/transparency-report-2018
[EDIT: I'm signing off. Thank you all for the great questions and feedback. I'll check back in on this occasionally and try to reply as much as feasible.]
Comment by [deleted] at 19/09/2019 at 22:14 UTC
200 upvotes, 4 direct replies
[deleted]
Comment by LargeSnorlax at 19/09/2019 at 22:58 UTC
75 upvotes, 4 direct replies
Vote Manipulation
Alright, so, you say you've "reduced the visibility" of vote manipulated content by 20%, but the number of replies I've received from www.reddit.com/report has gone down by 90%+.
I used to write up very detailed tickets. Some I detailed extensively, like in this post[1], because there is a TON of astroturfing and multiple-account/vote-manipulation stuff on the site. I wrote up a dozen pages of information.
1: https://www.reddit.com/r/ModSupport/comments/c8f6s1/user_account_ring_astroturfing_and_multiple/
This was very commonly done because, at one time, admins actioned these kinds of things. Has there been some sort of change in policy where nothing is done with the accounts once they're sent in, and there is no longer a reason to respond to any of the tickets?
I've written up *hundreds* of bought and purchased accounts into complex tickets, and never received any actual responses.
I understand Vote Manipulation is a tricky subject. I understand it takes time. I just wish the time spent on the *moderator* end of things was acknowledged by, at the very least, a reply, saying "Yes", or "No".
It's very disheartening seeing problematic behaviour continue, day after day, and know that it doesn't matter how much effort is put into documenting it, that nothing will be done.
This isn't to put the admins on blast, but I **do** remember days when I would send in Vote Manipulation tickets, and it might take a couple of days for a response, but I'd get a "yes" or a "no" answer and could log it properly. Nowadays, I just send in a ticket with minimal information, knowing it won't get actioned anyway, just as a sort of placebo, because I know the manipulation is happening; it just seems the admins don't care about it.
Comment by Halaku at 19/09/2019 at 22:17 UTC
34 upvotes, 2 direct replies
We are investing in a safety product team that will build improved safety (user and content) features on the site. We are still staffing this up, but we hope to deliver new features soon (including Crowd Control, which we are in the process of refining thanks to the good feedback from our alpha testers).
Does this mean that y'all are hiring new people for this team, or are these employees reallocated / additionally tasked that y'all already have on board?
Comment by DerekSavoc at 20/09/2019 at 00:51 UTC*
28 upvotes, 3 direct replies
If you look at reddit's content policy[1] you will see that content which "Encourages or incites violence" is prohibited. If you follow that link it takes you to this page[2], which details what content meets these criteria. That page also says, "To report Violent Content, please visit this page[3]." Here are the options for things you can report on that page.
1: https://www.redditinc.com/policies/content-policy
3: https://www.reddit.com/report
This is spam
This is abusive or harassing
It infringes my copyright
It infringes my trademark rights
It's personal and confidential information
It's sexual or suggestive content involving minors
It's involuntary pornography
It's ban evasion
It's vote manipulation
It's a transaction for prohibited goods or services
It impersonates me
Report this content under NetzDG
It's threatening self-harm or suicide
You will notice that the option "It encourages or incites violence" is not in this list. In fact, of all the things explicitly listed as prohibited, the only two the page doesn't show in that list are illegal content and content that encourages or incites violence. For illegal content, you could hypothetically report it to the police, and they could contact reddit through reddit's law enforcement inquiries section. But for content that breaks the content policy by encouraging or inciting violence, there isn't an obvious way to report it to the admins if you haven't had it explained to you.
"It threatens violence or physical harm" only seems to cover content that "calls for violence or physical harm" not content that "encourages, glorifies, incites" violence or physical harm. Threats are direct, a post talking about how "Muslims are ruining America, someone should find a final solution” are not direct threats, but they definitely encourage and could incite violence or physical harm.
We all know what the term stochastic terrorism means, most of us didn’t three years ago. Things have changed. Their needs to be a better way to report this content.
I have been told that the option in the submenu is the proper option to use, but all of this seems needlessly confusing.
Is there any plan to redesign and integrate this system into the main site so that this kind of concerning content is easier to report?
The silence speaks volumes /u/worstnerd
Comment by wampastompah at 19/09/2019 at 22:45 UTC
24 upvotes, 4 direct replies
Thanks for the update! I really don't envy you the task of hunting down these accounts/bots.
Though there's one thing that I think could be made clearer. You said that the effect of Russian trolls in 2017 was minimal, and yet you say that you're constantly improving detection algorithms. Have you gone back over the 2017 data with the new algorithms to recheck those numbers?
I often see posts claiming that Reddit does not have a bot/troll problem and that it's just paranoia to suggest people are manipulating content on Reddit. While I understand why you may not want to make a statement like this, I think it would help transparency if someone from Reddit would say, "Yes, we have some issues with Russian bots and trolls," and give some stats on how pervasive they actually are in various subreddits, given the new tools and detection algorithms you have.
Comment by kaptainkeel at 19/09/2019 at 22:53 UTC
11 upvotes, 1 direct replies
Regarding karma-farming accounts. Can you tell a little about what these are typically used for? Are they used by malicious actors, or more just to post hidden advertisements?
For example, I've noticed a huge influx of accounts in the past 2-3 months that repost previously top-rated posts. Then (typically) in the comment section, there will be another account that posts the top comment from the previous top thread. The majority of the time these are older accounts (6+ months minimum, but often over 2-3 years and sometimes as long as 6+ years) that have a gap between content. They'll post seemingly normal stuff for a while, then there's a gap of a few months or even years, then a massive amount of postings, typically of previous top pictures/videos/articles, and also typically cross-posting the same thing to multiple subreddits. I pointed out one example here[1], although it seems those accounts are no longer around.
1: https://www.reddit.com/r/AskReddit/comments/cy9d9b/what_screams_im_uneducated/eyrnhfv/?context=3
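(The pattern described above, an aged account, a long dormant gap, then a burst of reposts, is mechanical enough to express as a simple heuristic. A sketch with entirely made-up thresholds, not any platform's real values:)

```python
from datetime import datetime, timedelta

# Hypothetical thresholds for illustration only.
DORMANCY = timedelta(days=180)    # gap suggesting a sold/compromised account
BURST_WINDOW = timedelta(days=7)  # how soon after waking the burst happens
BURST_MIN_POSTS = 30              # posts in that window to count as a burst

def looks_like_repost_farm(post_times: list[datetime]) -> bool:
    """Flag accounts with a long dormant gap followed by a posting burst."""
    times = sorted(post_times)
    for i in range(1, len(times)):
        if times[i] - times[i - 1] >= DORMANCY:
            # Count posts in the window right after the account wakes up.
            burst = [t for t in times[i:] if t - times[i] <= BURST_WINDOW]
            if len(burst) >= BURST_MIN_POSTS:
                return True
    return False

# Example: an account quiet for over a year, then 40 posts in two days.
base = datetime(2019, 9, 1)
history = [datetime(2017, 1, 1), datetime(2018, 3, 1)] + \
          [base + timedelta(hours=h) for h in range(40)]
print(looks_like_repost_farm(history))  # True
```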
Comment by [deleted] at 19/09/2019 at 23:52 UTC
11 upvotes, 3 direct replies
[deleted]
Comment by [deleted] at 19/09/2019 at 22:56 UTC
8 upvotes, 2 direct replies
I've seen hundreds of accounts, anywhere from 1 to 12 years old, that are suddenly posting spam for t-shirts/mugs/Alibaba dropship items, or posing as women in various NSFW subreddits trying to get men to pay them money. The t-shirt spammers usually leave the account history intact; the porn catfishers usually wipe it. Many of these have posts that are obviously being vote-manipulated up.
Do you know what percentage of compromised accounts are being used for political purposes vs. being used to spam/scam for money?
Comment by Beard_of_Valor at 19/09/2019 at 22:54 UTC
8 upvotes, 3 direct replies
These trolls often buy accounts, particularly the US-based campaign-aligned ones rather than foreign adversaries. These accounts have high karma in various subs, and are scrubbed of history and reused. This has never seemed to be curtailed. An account changing its password and having its history blown out seems like a good indicator that something weird is happening: information a system could consume and alert on.
In other threads other common patterns of bad behavior have been identified. They can be systematically identified.
Why is it so easy to do the same wrong stuff? I work in healthcare IT. I'm aware of how complex enterprise tech can be. I'm aware that fraud, waste, and abuse is an arms race as each side figures the other out. But you haven't ratcheted forward. Not once has a scheme that succeeded yesterday been made to fail today in a permanent way. It's not like you've been cut and continue to bleed. It's like you've been cut and have continued to pull the knife through your flesh.
Comment by SequesterMe at 19/09/2019 at 23:22 UTC
6 upvotes, 4 direct replies
TL;DR: I think bots are being used to target and downvote the posts and comments of certain people, based on how their prior comments have irritated the people who control the bots.
I'm fairly certain that sometimes users like myself are targeted for downvotes. Originally it seemed that it was just tards that would go in and downvote most any post I did because I'd pissed them off on some comment or post. It happened to a whole slew of posts that I had recently made all at one time. That crap still happens but it's to be expected. Then it seems the bots came in.
I could watch a couple of upvotes happen on a post and then a blast of downvotes and then a couple of periodic upvotes if not stagnation. Then it got more sophisticated. Each post was always at 1 or 0. You see, when you get downvotes at least you can say someone noticed. However, if it looks like no one votes at all then a user gets disheartened and leaves the discussion. I've looked at a couple other users history and seen the same behavior.
There have been times I've seen a whole slew of 0 totals on a whole series of responses to a particular post. I couldn't see a rhyme or reason to the voting pattern so I figured it wasn't a tard going all downvote wild on the particular post. However, I saw it happen all over the place and now believe it's more likely that a bot had been configured incorrectly.
I don't get paranoid often, but this seems real. Could you look into it?
Comment by PMaggieKC at 20/09/2019 at 01:04 UTC
7 upvotes, 1 direct replies
I have a concern about content manipulation. The subreddit r/muacirclejerk is being policed by the mods of r/MakeupAddiction. If you want backstory and screenshots I have them, but a mod at MUA went on alt accounts to harass an MUACJ member. This mod admitted it but refused to step down. MUA mods started mass banning any members that also commented on MUACJ (they also admitted this) and now are policing the circlejerk sub. Circlejerk subs parody the main sub and link to the original content. MUACJ can no longer link to original content or posts get removed. An MUA mod is obviously in a head mod’s pocket. Dozens, possibly hundreds of complaints about MUA mods have been submitted and nothing has been done. I’ve personally been harassed by them, nothing happened when I complained and submitted proof. This is blatant brigading and content manipulation, is the makeup community just not important enough for any action to be taken?
Edit: during the mass banning, over 30k members were banned or unsubscribed. The mods' response to this (again, I have screenshots) was literally "Boo fucking hoo." These are the people you are supporting by your inaction.
Comment by CthuIhu at 20/09/2019 at 04:07 UTC
7 upvotes, 1 direct replies
Just waiting for another website to occupy the husk of integrity that reddit used to own. Now if you'll excuse me, I have 100 fucking sponsored advertisements to sift through in order to find the content I actually want. Thanks for basically nothing. You are digg 3.0. You will absolutely fall to the ground, and it shouldn't take too long, because you left your integrity at the door along with your crocs, or your birkenstocks, or whatever douchebag profiteering assholes wear these days
Suck my balls
Comment by CX52J at 19/09/2019 at 23:19 UTC
7 upvotes, 1 direct replies
Can you do something about all the T-shirt spamming? It uses stolen content, it's a scam, and there is an excessive amount of it.
Comment by BearAndBullWhisperer at 19/09/2019 at 23:57 UTC
7 upvotes, 0 direct replies
Does this update relate only to events such as political elections?
If not, I feel this could be applied to other areas of reddit, such as individual subreddits.
I've noticed some subreddits being misused to manipulate content. One example is the subreddit r/btc. "BTC" is the ticker abbreviation for Bitcoin, but that subreddit seems to have been taken over by individuals who only favour Bitcoin's direct competitor, Bitcoin Cash. Anything you ask or post in the subreddit will only be received positively if it reflects positively on the competitor. I understand this is a freedom-of-posting thing, but it becomes a bit dangerous for newcomers when they are spending their hard-earned money to buy something that isn't what they think they are buying. It's easy to be misled by the content manipulation.
How do you prevent this from happening with other subreddits? For example, round-earther subreddits being taken over by flat-earthers, or an anti-depression subreddit being taken over by emo rock fans.
Anyway, sorry for bringing this random topic up, but I thought it might be worth bringing to your attention.
Comment by [deleted] at 20/09/2019 at 05:02 UTC
6 upvotes, 1 direct replies
[deleted]
Comment by Realtrain at 19/09/2019 at 22:38 UTC
16 upvotes, 2 direct replies
Are there any tools you can give moderators to help find issues of vote manipulation? I've been having issues on my small subreddit, but it looks like nothing has been done when I've reported it. So I just have to listen to users complain without being able to do anything.
Comment by grublets at 19/09/2019 at 22:38 UTC
14 upvotes, 1 direct replies
I learned a new word today: "shitheadery".
Comment by [deleted] at 19/09/2019 at 22:46 UTC
5 upvotes, 0 direct replies
What I wouldn't mind seeing is stricter rules on the manipulation of article titles to create misleading narratives.
Comment by Zauberer-IMDB at 19/09/2019 at 22:51 UTC
5 upvotes, 2 direct replies
What kind of training, if any, are you providing the mom and pop mods running subreddits around the site, generally, and mods on the big target subreddits in particular, like /r/politics?
Comment by gill__gill at 20/09/2019 at 00:02 UTC
5 upvotes, 1 direct replies
Have you guys looked into mods pinning their own normal comments? Isn't that vote manipulation? I personally believe Reddit needs a central oversight team that looks over Reddit as a whole, where you can report mods or things that don't otherwise get taken care of. Messaging the admins seems inefficient, and not everyone is satisfied with the support.
How does this sound?
Comment by [deleted] at 20/09/2019 at 02:49 UTC*
6 upvotes, 2 direct replies
[deleted]
Comment by KanyeWesleySnipes at 20/09/2019 at 04:31 UTC
5 upvotes, 1 direct replies
Why is black people twitter allowed to ban people from commenting for being white? It's not just that it's a sub for black people; they specifically ban white people. I don't get how that is okay.
Comment by juice16 at 19/09/2019 at 22:51 UTC
13 upvotes, 5 direct replies
Hello Canadian here,
As you may know, there is an election happening in Canada in just over a month. I've been reading many concerns from fellow Canadians about political content manipulation in r/Canada by that subreddit's mod team. What actions can you or your team take to ensure our elections are safe from manipulation by the mod team of r/Canada?
Thanks,
Juice16
Comment by PokeCraft4615 at 19/09/2019 at 22:10 UTC
20 upvotes, 1 direct replies
Thank you for the update!
Comment by Brotherman_1 at 19/09/2019 at 22:37 UTC
10 upvotes, 4 direct replies
Are you ever going to do anything about false DMCA claims? Or are you too lazy, since it's just easier to shut a sub down?