Comment by Snoos-Brother-Poo on 10/04/2018 at 20:52 UTC*

1045 upvotes, 6 direct replies (showing 6)

View submission: Reddit’s 2017 transparency report and suspect account findings

How did you determine which accounts were “suspicious”?

Edit: shortened the question.

Replies

Comment by spez at 10/04/2018 at 21:00 UTC

1248 upvotes, 21 direct replies

There were a number of signals: suspicious creation patterns, usage patterns (account sharing), voting collaboration, etc. We also corroborated our findings with public lists from other companies (e.g. Twitter).

Comment by Deto at 10/04/2018 at 21:00 UTC

6 upvotes, 0 direct replies

This is pretty important. I wonder what the estimated false-negative rate on this is? Maybe it's just really hard to detect fake accounts that are properly set up (e.g., their traffic origin is hidden).

Comment by Jaredlong at 10/04/2018 at 21:20 UTC*

2 upvotes, 1 direct reply

I was hoping for the same thing, especially since most of the accounts have zero karma. My guess is that they tracked the source of the accounts. Same IP addresses, maybe? And while I'm speculating, I'm willing to bet the zero-karma accounts were alts used for upvoting the other accounts and mass-downvoting other users. Being able to mass-deploy 600 upvotes is an easy way to get something off new and onto rising or the front page.

Comment by meowmixyourmom at 10/04/2018 at 23:02 UTC

2 upvotes, 0 direct replies

Nice try, Putin

Comment by gizamo at 11/04/2018 at 05:05 UTC

1 upvote, 0 direct replies

Nice try, ya suspicious account.

Comment by stefantalpalaru at 10/04/2018 at 21:04 UTC

-3 upvotes, 0 direct replies

This may be a dumb question, but how did you determine which accounts were “suspicious”?

The commenter was questioning spez's access to the Reddit database, after he had edited some users' comments as a prank.