The Rollout of Reputation Service

https://www.reddit.com/r/RedditEng/comments/nudfv1/the_rollout_of_reputation_service/

created by SussexPondPudding on 07/06/2021 at 14:08 UTC

47 upvotes, 3 top-level comments (showing 3)

Authors: Qikai Wu, Jerroyd Moore and Melissa Cole

Overview of Reputation Service

As the home for communities, one of Reddit’s major responsibilities is to maintain the health of our communities by empowering good-faith, contributing members. We quantify someone’s reputation within a Reddit community as their karma[1]. Whether or not a user is an explicit member of a community, their karma within it approximates how much they are a part of it.

1: https://reddit.zendesk.com/hc/en-us/articles/204511829-What-is-karma-

Today, karma is simplistic. It’s an approximate reflection of upvotes in a particular community but not a 1:1 relationship. Under the hood, karma is stored with other attributes of users in a huge account table. Currently we have 555M karma attributes at ~93GB, and they are still growing over time, which makes it very difficult to introduce new karma-related features. In order to better expand how karma is earned, lost, and used on Reddit, it’s time for us to separate karma from other user attributes, and that’s why we want to introduce Reputation Service, an internal microservice.


https://preview.redd.it/a0a0k2rrnu371.png?width=1600&format=png&auto=webp&s=0eda20cfa74cf47069e187ccb455ff017b138e59

Reputation Service provides a central spot to store karma and to add new types of karma or reputation. As the diagram above shows, there are two workflows in the current Reputation Service. On the one hand, karma is adjusted by consuming vote events off Reddit’s Kafka event pipeline. On the other hand, downstream services fetch karma from Reputation Service to make business decisions.
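The vote-adjustment side of that workflow can be sketched roughly as follows. The event fields and the `ReputationStore` class are illustrative stand-ins for this post, not Reddit’s actual schema or code:

```python
def vote_to_karma_delta(event: dict) -> tuple:
    """Translate one vote event from the pipeline into a karma adjustment."""
    # direction: +1 for an upvote, -1 for a downvote (hypothetical field names)
    return (event["author_id"], event["subreddit_id"], event["direction"])


class ReputationStore:
    """In-memory stand-in for the Reputation Service database."""

    def __init__(self):
        self._karma = {}  # (user_id, subreddit_id) -> karma total

    def adjust(self, user_id, subreddit_id, delta):
        key = (user_id, subreddit_id)
        self._karma[key] = self._karma.get(key, 0) + delta

    def get(self, user_id, subreddit_id):
        # Downstream services read this value to make business decisions
        return self._karma.get((user_id, subreddit_id), 0)
```

A consumer loop would read each event off Kafka, map it through `vote_to_karma_delta`, and apply the delta to the store.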

Rollout Process

As an important user signal, karma is widely used among Reddit services to better protect our communities. There are 9 different downstream services which need to read users’ karma from Reputation Service and the aggregated request rate is tens of thousands of requests per second. To minimize the impact for other services, we leveraged a two-phase rollout process.

First, karma changes were dual-written to both the legacy account table and the new database in Reputation Service. After comparing the karma of randomly chosen users in both databases over a fixed period of time to verify that the karma-increment workflow worked properly, we backfilled existing karma from the legacy table and converted it to the new schema.
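The dual-write phase and the sampled comparison can be sketched like this; `DualWriter` and `diverged_users` are hypothetical names used only for illustration:

```python
class DualWriter:
    """Phase one: apply every karma adjustment to both stores."""

    def __init__(self, legacy_store, new_store):
        self.legacy = legacy_store  # the legacy account table
        self.new = new_store        # the new Reputation Service database

    def adjust(self, user_id, delta):
        # Writes go to both places so the two stores can be compared
        self.legacy[user_id] = self.legacy.get(user_id, 0) + delta
        self.new[user_id] = self.new.get(user_id, 0) + delta


def diverged_users(legacy_store, new_store, sample_ids):
    """Return the sampled users whose karma differs between the two stores."""
    return [u for u in sample_ids
            if legacy_store.get(u, 0) != new_store.get(u, 0)]
```

If `diverged_users` stays empty for a random sample over a fixed window, the new write path can be trusted and the backfill can proceed.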

Second, we started enabling karma reads from downstream services. Because of the existing karma logic and the high request rate, we rolled Reputation Service out to downstream services gradually, one by one, and picked up plenty of learnings along the way.
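A common way to implement this kind of gradual per-user rollout is deterministic hash bucketing. The sketch below illustrates the general technique, not Reddit’s actual feature-flag system:

```python
import zlib


def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministically place a user in one of 100 buckets by hashing
    their id; buckets below the rollout percentage get the new read path."""
    return zlib.crc32(user_id.encode("utf-8")) % 100 < percent
```

Because the bucket is a pure function of the user id, a user who is enabled at 10% stays enabled as the percentage ramps up, which keeps behavior consistent during the rollout.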


https://preview.redd.it/okn27zcxnu371.png?width=781&format=png&auto=webp&s=f5358847623744068c6c3fb005b284ebf5f027d6

The graph above shows how the rollout percentage progressed over time. The rollout lasted several weeks, and we gained a lot of experience along the way, including caching optimization, resource allocation, and failure handling. We cover these in detail in the next section.

Learnings

Optimization of Caching Strategy

Optimizing the caching strategy was one major challenge that we revisited multiple times during the rollout. Initially, we cached karma in Redis without a TTL, relying on Redis’s LRU policy to evict the least-recently-used items when the cache is full.

https://preview.redd.it/317n4k74ou371.png?width=512&format=png&auto=webp&s=82cb5f6cce3604e1f58cf9db9309e2f9abac2a5b

2: https://www.reddit.com/r/shittychangelog/comments/mqx6si/delete_karmas_old_caching_system_and_the_current/

Redis worked perfectly until memory filled up and evictions started to happen. We observed latency spikes while Redis evicted items: Redis with LRU eviction is a common pattern, but we were storing a billion items and needed to respond to requests within milliseconds (our p99 latency target is ≤50ms).

https://preview.redd.it/if0c3w1mou371.png?width=512&format=png&auto=webp&s=fd03681c275e3ec13ff318cb3319ab53fa7c36e3

As a result, we reintroduced a TTL on cache entries and fine-tuned it so that Redis memory usage stayed at a relatively constant level, avoiding large-scale evictions, while the cache hit rate remained high enough to keep the load on the database under control. Our cache hit ratio decreased from 99% to 89%, while our p99 latency stayed below 30ms.
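The trade-off can be seen in a minimal read-through cache with a TTL. This is a simplified in-process sketch of the pattern, with hypothetical names, rather than the actual Redis-backed implementation:

```python
import time


class TTLCache:
    """Read-through cache: entries expire after ttl_seconds, bounding how
    long stale data lives and keeping steady-state memory roughly constant."""

    def __init__(self, ttl_seconds, fetch):
        self.ttl = ttl_seconds
        self.fetch = fetch  # fallback to the database on a miss
        self._data = {}     # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._data.get(key)
        now = time.monotonic()
        if entry is not None and now < entry[1]:
            return entry[0]  # cache hit
        # Cache miss or expired entry: reload from the backing store
        value = self.fetch(key)
        self._data[key] = (value, now + self.ttl)
        return value
```

A longer TTL raises the hit rate but lets memory grow toward eviction territory; a shorter TTL keeps memory flat at the cost of more database reads, which is the balance described above.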

After going through all of the above stages and improvements, Reputation Service has reached a stable state to serve the many tens of thousands of requests per second.

Health Checks & Scaling Down Events

A misconfigured health check meant that when Reputation Service was scaling down after high-traffic periods, requests were still being routed to instances of the service that had already terminated. This added about 1,000 errors per downscaling event. While this added less than 0.00003% to the service’s overall error rate, patching it in our standard library[3], baseplate.py, will improve error rates for our other Reddit services too.

3: https://github.com/reddit/baseplate.py/commit/c4a815d2729c3e232849b1e8079d3fb698f621ef
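One common fix for this class of bug is to fail the readiness check as soon as the process receives SIGTERM, so the load balancer drains the instance before it actually shuts down. The sketch below shows the general idea only; it is not the baseplate.py patch linked above:

```python
import signal


class ReadinessProbe:
    """Report 'not ready' the moment the process is asked to terminate,
    so the load balancer stops routing requests to this instance."""

    def __init__(self):
        self.ready = True

    def install(self):
        signal.signal(signal.SIGTERM, self._on_sigterm)

    def _on_sigterm(self, signum, frame):
        # Flip readiness first; actual shutdown proceeds after draining
        self.ready = False

    def status(self):
        # What the health-check endpoint would return
        return (200, "ok") if self.ready else (503, "draining")
```

With this in place, a downscaling event stops sending traffic to an instance before it terminates instead of after.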

Failure Handling in Downstream Services

To guarantee that the website functions properly during Reputation Service outages, each client makes its business decisions without karma independently, and most downstream systems fail gracefully when Reputation Service is unavailable, minimizing the impact on users. However, due to retry logic in the clients, the thundering herd problem[4] can occur during an outage, making it harder for Reputation Service to recover. To address this, we added a circuit breaker[5] for the client with the largest traffic, so that traffic can be rate-limited when an incident happens, allowing Reputation Service to recover.

4: https://en.wikipedia.org/wiki/Thundering_herd_problem

5: https://en.wikipedia.org/wiki/Circuit_breaker_design_pattern
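A minimal circuit breaker of the kind referenced above can be sketched as follows. This is a generic illustration of the pattern with assumed thresholds, not the client code Reddit shipped:

```python
import time


class CircuitBreaker:
    """Open after max_failures consecutive errors; while open, serve the
    fallback instead of hammering the struggling service. After reset_after
    seconds, allow one trial request through (half-open)."""

    def __init__(self, max_failures=5, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                return fallback()  # fail fast, shedding load from the service
            self.opened_at = None  # half-open: let one trial request through
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0
        return result
```

While the breaker is open, the client gets its graceful-degradation answer immediately and Reputation Service sees no retry traffic, which is exactly what lets it recover.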

Resource Allocation During Rollout

Another lesson we learned is to over-provision the service during the rollout and address cost concerns later. When we first scaled the service up gradually in step with the rollout rate, several small incidents occurred due to resource limits. Once we allocated enough resources to keep CPU/memory usage below 50% and gave the cluster adequate headroom to auto-scale, we could focus on the other problems encountered during the rollout instead of constantly keeping an eye on resource usage. This expedited the overall process.

The Future

The rollout of Reputation Service is just a starting point. There are many opportunities to expand how karma is earned, lost, and used on Reddit. By further developing karma with Reputation Service, we can encourage good user behavior, discourage the bad, reduce moderator burden, make Reddit safer, and reward brands for embracing Reddit. If this is something that interests you and you would like to join us, please check out our careers page[6] for a list of open positions.

6: https://www.redditinc.com/careers/#hiring

Comments

Comment by simmermayor at 09/06/2021 at 20:02 UTC

2 upvotes, 0 direct replies

Interesting read

Comment by MajorParadox at 07/06/2021 at 15:17 UTC

1 upvote, 0 direct replies

Interesting! Would this eventually solve issues around renaming/changing case of subreddit and/or usernames? From what I understand, the way karma is stored today is what keeps such features from being added.

Comment by CryptoMaximalist at 09/06/2021 at 23:13 UTC

1 upvote, 0 direct replies

Will any of this new reputation system or new karma types be exposed to users or mods through the API?