
Lessons from Discord moving from Redshift to BigQuery

Author's note: We will hear from Discord – maker of a popular voice and video chat app for gaming. Discord must deliver great experiences to millions of customers simultaneously while keeping up with evolving needs. Here's how they switched from Amazon Redshift to Google Cloud BigQuery to support their growth.

At Discord, our chat app supports more than 50 million monthly users. We used Amazon Redshift as our data warehouse for many years, but for both technical and business reasons we have switched completely to BigQuery. Since the migration, we have been able to serve users faster, combine AI and ML capabilities, and ensure compliance.


The challenges that caused Discord to move

Our team at Discord started looking at alternatives when we realized we were running into technical and cost limitations on Redshift. We knew that if we wanted our data warehouse to scale with our business, we had to find a new solution. Technically, we realized that at our rate of growth we would reach the maximum cluster size for DC2-class nodes (128 compute nodes) within six months. The cost of using Redshift had also become a challenge: we were paying hundreds of thousands of dollars per month, not including storage and network egress costs between Google Cloud and AWS. (We were already using Google Cloud for our chat application.)

We looked at several Google Cloud-based solutions and determined that BigQuery would be the right solution for us at larger scale: it has known customers operating at scales beyond Discord's, it sits close to where our data already lives, and Google Cloud already had pipelines in place to load that data. Another key reason for choosing BigQuery is that it is completely serverless, so it requires no upfront hardware provisioning and management. We could also take advantage of the then-new BigQuery Reservations feature to achieve significant savings through the use of fixed processing capacity.

Trade-offs and challenges in the transition process

We had some preparation to do before and during the move. An initial challenge was that while both Redshift and BigQuery are designed to handle analytics workloads, they are very different.

For example, in Redshift we had a denormalized set of tables where each of our application events ended up in its own table, and most of our analytics queries needed to join them together. Running an analytics query on user retention involves analyzing data across many different events and tables, so this kind of JOIN-heavy workload exposes performance differences between the two systems. We had previously relied on batch sort ordering of large volumes of data, a method that BigQuery supports only with limitations. Redshift and BigQuery also handle partitioning differently, so keying on something like a user ID is not as fast in BigQuery, because the data layout is different. We therefore used timestamp partitioning and clustering on JOIN fields, which improved performance in BigQuery. Other aspects of BigQuery delivered significant immediate advantages that made the migration worthwhile: ease of management (one vendor instead of many, no maintenance windows, no VACUUM/ANALYZE), scalability, and price-performance.
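As a rough sketch of the partition-and-cluster pattern described above, the snippet below composes a BigQuery DDL statement that partitions an events table by day on a timestamp column and clusters it on the columns used in JOINs. The table and column names (`analytics.app_events`, `event_ts`, `user_id`) are hypothetical, not Discord's actual schema:

```python
# Hypothetical sketch: build a BigQuery DDL statement with daily timestamp
# partitioning plus clustering on the JOIN keys. All names are illustrative.
def events_table_ddl(table: str, ts_col: str, cluster_cols: list[str]) -> str:
    return (
        f"CREATE TABLE `{table}` (\n"
        f"  {ts_col} TIMESTAMP NOT NULL,\n"
        f"  user_id STRING,\n"
        f"  event_name STRING\n"
        f")\n"
        f"PARTITION BY DATE({ts_col})\n"          # prunes scans to relevant days
        f"CLUSTER BY {', '.join(cluster_cols)}"   # co-locates rows for JOIN keys
    )

ddl = events_table_ddl("analytics.app_events", "event_ts", ["user_id", "event_name"])
print(ddl)
```

Queries that filter on `DATE(event_ts)` then only scan the matching partitions, and clustering keeps rows with the same `user_id` physically close, which is what recovers JOIN performance on a layout like this.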

There were a number of other considerations we took into account when making this move. We had to convert over a hundred thousand lines of SQL into BigQuery syntax, so we used the ZetaSQL library and the PostgreSQL parser to perform the conversion. To do this, we forked an open source parser and modified the grammar so it could parse all of our existing Redshift SQL. Building this was a non-trivial part of the move: the tool can produce an abstract syntax tree (also known as a parse tree) from a Redshift query and output the equivalent BigQuery query. Additionally, we re-architected the way we build pre-aggregated data views to suit BigQuery. Moving to a fixed-capacity model using BigQuery Reservations gave us workload isolation, consistent performance, and predictable costs. The final migration step was getting used to the new model and educating stakeholders about the new way of operating.
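To give a flavor of the dialect-translation problem, here is a deliberately tiny sketch that rewrites a few common Redshift idioms into BigQuery standard SQL with pattern substitution. It is nothing like the parser-based tool described above (a production converter works on a full parse tree, as the article says); the rewrite rules shown are just well-known dialect differences:

```python
import re

# Toy sketch of Redshift -> BigQuery dialect translation via regex rewrites.
# A real converter, like the one described in the article, operates on an
# abstract syntax tree rather than on raw text.
REWRITES = [
    # Redshift GETDATE() -> BigQuery CURRENT_TIMESTAMP()
    (re.compile(r"\bGETDATE\(\)", re.IGNORECASE), "CURRENT_TIMESTAMP()"),
    # Redshift `expr::type` cast shorthand -> BigQuery CAST(expr AS type)
    (re.compile(r"(\w+)::(\w+)"), r"CAST(\1 AS \2)"),
    # Redshift NVL(...) -> BigQuery IFNULL(...)
    (re.compile(r"\bNVL\(", re.IGNORECASE), "IFNULL("),
]

def redshift_to_bigquery(sql: str) -> str:
    for pattern, replacement in REWRITES:
        sql = pattern.sub(replacement, sql)
    return sql

print(redshift_to_bigquery(
    "SELECT NVL(name, 'n/a'), created_at::date FROM users WHERE ts < GETDATE()"
))
```

Text-level rewrites like these break down quickly on nested expressions and strings, which is exactly why a grammar-level approach was needed at Discord's scale.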

“Moving from Redshift to BigQuery has changed the game for our organization. We were able to overcome performance bottlenecks and capacity constraints and fearlessly unlock actionable insights for our business.”

Spencer Aiello – Head and technical manager, Machine Learning at Discord


Using BigQuery as our data platform

Since completing the migration, BigQuery has helped us accomplish our goals around scale, user privacy, and GDPR compliance. BigQuery now powers all of our reporting, dashboarding, machine learning, and data exploration use cases at Discord. Thousands of queries run on our stored data every day. We would not have been able to scale our queries on Redshift the way we can with BigQuery.

With BigQuery, we can keep operations running smoothly without any disruption to our business. This was a breath of fresh air compared to the end of our Redshift usage, when we once had over 12 hours of downtime just to do nightly maintenance. Those operations could fail and cause us to miss our internal 24-hour SLA for ingesting data. To cope with this in the past, we had to start proactively deleting and truncating tables in Redshift, leading to incomplete and less accurate insights.

We've also seen other benefits of moving to BigQuery: user data requests have become cheaper and faster to service; BigQuery streaming inserts let us observe machine learning experiments and model results from AI Platform in real time; and we can easily support new use cases for Discord's trust and safety, financial, and volume analytics. It's safe to say that BigQuery is the foundation for all analytics at Discord.
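To illustrate the streaming-ingestion idea above, the sketch below shapes hypothetical ML experiment results into JSON-serializable rows of the kind a BigQuery streaming insert accepts. Every name here (`experiment_id`, `retention-model-v2`, the metric fields) is invented for illustration; in production such rows would be handed to the BigQuery client's streaming API rather than just printed:

```python
import datetime
import json

# Hypothetical sketch: shape ML experiment metrics as rows suitable for
# BigQuery streaming ingestion. Field names and values are illustrative only;
# a real pipeline would send these rows via the BigQuery client library.
def experiment_row(experiment_id: str, metric: str, value: float) -> dict:
    return {
        "experiment_id": experiment_id,
        "metric": metric,
        "value": value,
        # Timestamp the observation so dashboards can show results in real time.
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

rows = [experiment_row("retention-model-v2", "auc", 0.91)]
payload = json.dumps(rows)  # rows must be JSON-serializable to stream
print(payload)
```

Because streamed rows become queryable within seconds, model metrics written this way can be charted from the warehouse while an experiment is still running.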

It's a huge benefit that we can now provide stable performance to users without having to worry about resource constraints. We can now support thousands of queries across hundreds of terabytes of data every day without having to think too much about resources. We can share access to analytics information across teams, and we're well prepared for the next step of using BigQuery's AI and ML capabilities.

Learn more about Discord and BigQuery.

Source: Gimasys
