{"id":22238,"date":"2025-04-15T10:57:56","date_gmt":"2025-04-15T03:57:56","guid":{"rendered":"https:\/\/gcloudvn.com\/?p=22238"},"modified":"2025-04-23T11:04:45","modified_gmt":"2025-04-23T04:04:45","slug":"colossus-under-the-hood-how-we-deliver-ssd-performance-at-hdd-prices","status":"publish","type":"post","link":"https:\/\/gcloudvn.com\/en\/kienthuc\/colossus-under-the-hood-how-we-deliver-ssd-performance-at-hdd-prices\/","title":{"rendered":"Colossus under the hood: How we deliver SSD performance at HDD prices"},"content":{"rendered":"<section class=\"wpb-content-wrapper\"><div class=\"vc_row wpb_row vc_row-fluid\"><div class=\"wpb_column vc_column_container vc_col-sm-12\"><div class=\"vc_column-inner\"><div class=\"wpb_wrapper\">\n\t<div class=\"wpb_text_column wpb_content_element\" >\n\t\t<div class=\"wpb_wrapper\">\n\t\t\t<p><span style=\"font-weight: 400;\">From YouTube and Gmail to BigQuery and Cloud Storage, almost all of Google\u2019s products depend on Colossus, our foundational distributed storage system. As Google\u2019s universal storage platform, Colossus achieves throughput that rivals or exceeds the best parallel file systems, has the management and scale of an object storage system, and an easy-to-use programming model that\u2019s used by all Google teams. Moreover, it does all this while serving the needs of products with incredibly diverse requirements, be it scale, affordability, throughput, or latency.<\/span><\/p>\n<table>\n<tbody>\n<tr>\n<td><b>Application<\/b><\/td>\n<td><b>I\/O sizes<\/b><\/td>\n<td><b>Expected performance<\/b><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">BigQuery scans<\/span><\/td>\n<td><span style=\"font-weight: 400;\">hundreds of KBs to tens of MBs<\/span><\/td>\n<td><span style=\"font-weight: 400;\">TB\/s<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Cloud Storage - standard<\/span><\/td>\n<td><span style=\"font-weight: 400;\">KBs to tens of MBs<\/span><\/td>\n<td><span style=\"font-weight: 400;\">100s of milliseconds<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Gmail messages<\/span><\/td>\n<td><span style=\"font-weight: 400;\">less than hundreds of KBs<\/span><\/td>\n<td><span style=\"font-weight: 400;\">10s of milliseconds<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Gmail attachments<\/span><\/td>\n<td><span style=\"font-weight: 400;\">KBs to MBs<\/span><\/td>\n<td><span style=\"font-weight: 400;\">seconds<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">Hyperdisk reads<\/span><\/td>\n<td><span style=\"font-weight: 400;\">KBs to hundreds of KBs<\/span><\/td>\n<td><span style=\"font-weight: 400;\">&lt;1 ms<\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"font-weight: 400;\">YouTube video storage<\/span><\/td>\n<td><span style=\"font-weight: 400;\">MB<\/span><\/td>\n<td><span style=\"font-weight: 400;\">seconds<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span style=\"font-weight: 400;\">Colossus\u2019 flexibility shows up in a number of publicly available Google Cloud products. Hyperdisk ML utilizes Colossus solid state disk (SSD) to support 2,500 nodes reading at 1.2 TB\/s \u2014 impressive scalability. Spanner uses Colossus to address cheap HDD storage with super-fast SSD storage in the same filesystem, the foundation of its tiered storage feature. Cloud Storage uses Colossus SSD caching to deliver the cheapest storage while still supporting the intensive I\/O of demanding AI\/ML applications. Finally, BigQuery\u2019s Colossus-based storage provides super-fast I\/O to extra-large queries.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We last wrote about Colossus some time ago and wanted to give you some insights on how its capabilities support Google Cloud\u2019s changing business and what new capabilities we\u2019ve added, specifically around support for SSD.<\/span><\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_80 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewbox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewbox=\"0 0 24 24\" version=\"1.2\" baseprofile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/gcloudvn.com\/en\/kienthuc\/colossus-under-the-hood-how-we-deliver-ssd-performance-at-hdd-prices\/#Tong_quan_ve_Colossus\" >Colossus background<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/gcloudvn.com\/en\/kienthuc\/colossus-under-the-hood-how-we-deliver-ssd-performance-at-hdd-prices\/#Co_gi_moi_trong_viec_dinh_vi_SSD_cua_Colossus\" >What\u2019s new in Colossus SSD placement?<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/gcloudvn.com\/en\/kienthuc\/colossus-under-the-hood-how-we-deliver-ssd-performance-at-hdd-prices\/#Bo_nho_dem_doc_L4\" >L4 read caching<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/gcloudvn.com\/en\/kienthuc\/colossus-under-the-hood-how-we-deliver-ssd-performance-at-hdd-prices\/#L4_writeback_cho_Colossus\" >L4 writeback for Colossus<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/gcloudvn.com\/en\/kienthuc\/colossus-under-the-hood-how-we-deliver-ssd-performance-at-hdd-prices\/#Ket_luan\" >Conclusion<\/a><\/li><\/ul><\/nav><\/div>\n<h2><span class=\"ez-toc-section\" id=\"Tong_quan_ve_Colossus\"><\/span><b>Colossus background<\/b><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400;\">But first, here\u2019s a little background on Colossus:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Colossus is an evolution of the Google File System (GFS).<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The traditional Colossus filesystem is contained in a single datacenter.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Colossus simplified the GFS programming model to an append-only storage system that combines file systems\u2019 familiar programming interface with the scalability of object storage.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">The Colossus metadata service is made up of<\/span>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"2\"><span style=\"font-weight: 400;\">\u201ccurators\u201d that deal with interactive control operations like file creation and deletion<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"2\"><span style=\"font-weight: 400;\">\u201ccustodians,\u201d which maintain the durability and availability of data as well as disk-space balancing.<\/span><\/li>\n<\/ul>\n<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Colossus clients interact with curators for metadata and then directly store data on \u201cD servers,\u201d which host its HDDs or SSDs.<\/span><\/li>\n<\/ul>\n<p><a href=\"https:\/\/gcloudvn.com\/en\/kienthuc\/accelerate-ai-ml-workloads-using-cloud-storage-hierarchical-namespace\/attachment\/thang-72024-2025-04-15t102223-945\/\" rel=\"attachment wp-att-22230\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-22230\" src=\"https:\/\/gcloudvn.com\/wp-content\/uploads\/2025\/04\/Thang-72024-2025-04-15T102223.945.jpg\" alt=\"\" width=\"600\" height=\"375\" srcset=\"https:\/\/gcloudvn.com\/wp-content\/uploads\/2025\/04\/Thang-72024-2025-04-15T102223.945.jpg 600w, https:\/\/gcloudvn.com\/wp-content\/uploads\/2025\/04\/Thang-72024-2025-04-15T102223.945-18x12.jpg 18w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/a><\/p>\n<p><span style=\"font-weight: 400;\">Colossus is also a zonal product. Google has built a single Colossus file system for each cluster, an internal building block of a Google Cloud region. Most data centers have one cluster and therefore one Colossus file system, regardless of how many workloads run within that cluster. Many Colossus file systems have multi-exabyte capacities, including two separate file systems that exceed 10 exabytes each. This high scalability ensures that even the most demanding applications do not run out of disk space near the cluster\u2019s compute resources within a region.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">These demanding applications also require a large amount of IOPS and throughput. In fact, some of Google\u2019s largest file systems regularly exceed 50 TB\/s read throughput and 25 TB\/s write throughput. That\u2019s enough throughput to transfer over 100 full-length 8K movies per second!<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Google doesn\u2019t rely solely on Colossus to support large streaming I\/Os, either. Many applications perform small log writes or small random reads. Their busiest cluster delivered over 600 million IOPS, including both reads and writes.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Of course, to get that much performance, you need to get the data to the right place. It\u2019s hard to read at 50TB\/s if all your data is on slow drives. This leads to two key new improvements in Colossus: SSD caching and SSD data locality, both of which are powered by a system Google calls \u201cL4.\u201d<\/span><\/p>\n<h2><span class=\"ez-toc-section\" id=\"Co_gi_moi_trong_viec_dinh_vi_SSD_cua_Colossus\"><\/span><b>What\u2019s new in Colossus SSD placement?<\/b><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400;\">In a previous blog post about Colossus, Google mentioned how they put the \u201chottest\u201d data on SSDs and balance the rest of the data across all the devices in the cluster. This is even more important today, as SSDs become more affordable and their role in Google\u2019s data centers increases. No storage designer builds a system entirely out of HDDs anymore.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">However, SSD-only storage is still significantly more expensive than a hybrid SSD-HDD storage system. The challenge is to put the right data\u2014the data with the most I\/O accesses or the lowest latency requirements\u2014on SSDs, while keeping the majority of data on HDDs.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">So how does Colossus determine the most relevant data?<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><span style=\"font-weight: 400;\"> Colossus has several ways to choose the right data to put on the SSD:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">Force the system to put data on SSD: Here, a Colossus internal user can force the system to save data to SSD using the path:<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span> <span style=\"font-weight: 400;\">\/cns\/ex\/home\/leg\/partition=ssd\/myfile<\/span><span style=\"font-weight: 400;\">.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><span style=\"font-weight: 400;\"> This is the simplest way and ensures that all files are stored on the SSD. However, it is also the most expensive option.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Using hybrid placement: More experienced users can take advantage of \u201chybrid placement\u201d and tell the Colossus system to place only one copy on the SSD using the path:<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span> <span style=\"font-weight: 400;\">\/cns\/ex\/home\/leg\/partition=ssd.1\/myfile<\/span><span style=\"font-weight: 400;\">.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><span style=\"font-weight: 400;\"> This is a more cost-effective method, but if server D has an SSD replica that is unavailable, data access will be affected by HDD latency.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Using L4: For the majority of data at Google, most developers use L4 distributed SSD caching technology, which automatically selects the most appropriate data to place on the SSD.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<\/ul>\n<h2><span class=\"ez-toc-section\" id=\"Bo_nho_dem_doc_L4\"><\/span><b>L4 read caching<\/b><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p><span style=\"font-weight: 400;\">The L4 distributed SSD cache analyzes the application's access patterns and automatically places the most appropriate data on the SSD. When acting as a read cache, the L4 index servers maintain a distributed read cache:<\/span><\/p>\n<p><a href=\"https:\/\/gcloudvn.com\/en\/kienthuc\/accelerate-ai-ml-workloads-using-cloud-storage-hierarchical-namespace\/attachment\/thang-72024-2025-04-15t102251-899\/\" rel=\"attachment wp-att-22229\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-22229\" src=\"https:\/\/gcloudvn.com\/wp-content\/uploads\/2025\/04\/Thang-72024-2025-04-15T102251.899.jpg\" alt=\"\" width=\"600\" height=\"375\" srcset=\"https:\/\/gcloudvn.com\/wp-content\/uploads\/2025\/04\/Thang-72024-2025-04-15T102251.899.jpg 600w, https:\/\/gcloudvn.com\/wp-content\/uploads\/2025\/04\/Thang-72024-2025-04-15T102251.899-18x12.jpg 18w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/a><\/p>\n<p><span style=\"font-weight: 400;\">This means that when an application wants to read data, it first checks with an L4 index server. This index server tells the application whether the data is in the cache. If it is, the application reads the data from one or more SSDs. If not, the cache reports a cache miss, and the application fetches the data from the drive where Colossus placed it.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">When a cache miss occurs, the L4 may decide to insert the accessed data into the SSD cache. It does this by requesting an SSD host to transfer the data from the HDD host. When the cache is full, the L4 will remove some items from the cache to free up space for new data.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">L4 can adjust how \u201caggressive\u201d it is in placing data on SSDs. Google uses a Machine Learning (ML)-based algorithm to decide the right policy for each workload: put data into the L4 cache as soon as it is written, after the first read, or after a second read in a short period of time.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This approach works well for applications that frequently read the same data, greatly improving IOPS performance and bandwidth. However, it has one major drawback: Google still writes new data to the HDD. In fact, there are other important types of data for which L4 read caching doesn\u2019t save as much resources as it would like, namely data that is written, read, and deleted quickly (such as intermediate results for a large batch job), and database transaction logs and other files with many small appends. Neither of these types of workloads are well suited to HDDs, so it\u2019s better to write them directly to SSDs and skip the HDD altogether.<\/span><\/p>\n<h3><span class=\"ez-toc-section\" id=\"L4_writeback_cho_Colossus\"><\/span><b>L4 writeback for Colossus<\/b><span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><span style=\"font-weight: 400;\">Now, imagine that an internal user of Colossus wants to put some of their data on an SSD. They need to think carefully about which files should be on the SSD and how much SSD storage they need to purchase for their workload. If they have old files that are no longer being accessed, they may want to move that data from the SSD to the HDD. But Google knows from observing users that deciding on these parameters is difficult. To help users, Google has also improved the L4 service to automate this task.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><a href=\"https:\/\/gcloudvn.com\/en\/kienthuc\/accelerate-ai-ml-workloads-using-cloud-storage-hierarchical-namespace\/attachment\/thang-72024-2025-04-15t102316-932\/\" rel=\"attachment wp-att-22228\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-22228\" src=\"https:\/\/gcloudvn.com\/wp-content\/uploads\/2025\/04\/Thang-72024-2025-04-15T102316.932.jpg\" alt=\"\" width=\"600\" height=\"375\" srcset=\"https:\/\/gcloudvn.com\/wp-content\/uploads\/2025\/04\/Thang-72024-2025-04-15T102316.932.jpg 600w, https:\/\/gcloudvn.com\/wp-content\/uploads\/2025\/04\/Thang-72024-2025-04-15T102316.932-18x12.jpg 18w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/a>When used as a writeback cache, the L4 service advises Colossus administrators on whether and for how long to place a new file on the SSD. This is a complex issue! When a file is created, Colossus only knows the application that created the file and the file name \u2014 it cannot know for sure how the file will be used.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">To solve this problem, Google uses the same approach as the L4 read cache described in the CacheSack paper mentioned earlier. The application provides L4 with characteristics such as the file type or metadata about the database column containing the data. L4 uses these characteristics to classify files into \u201cbuckets\u201d and observes the I\/O patterns of each bucket over time. These I\/O patterns are then used to run online simulations with different storage policies, such as \u201cstorage on SSD for one hour,\u201d \u201cstorage on SSD for two hours,\u201d or \u201cno storage on SSD.\u201d Based on the simulation results, L4 selects the optimal policy for each bucket of data.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">These online simulations also serve another important purpose: they predict how L4 will allocate data if there is more or less SSD capacity. This allows Google to calculate how much I\/O can be offloaded from HDD to SSD at different SSD capacities. This information helps guide purchases of new SSD hardware and helps planners adjust SSD capacity across applications to optimize performance.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">When instructed, Colossus administrators can instruct the system to save new files on the SSD instead of the default HDD. After a certain period of time, if the file still exists, the administrator will move the data from the SSD to the HDD.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><a href=\"https:\/\/gcloudvn.com\/en\/kienthuc\/accelerate-ai-ml-workloads-using-cloud-storage-hierarchical-namespace\/attachment\/thang-72024-2025-04-15t102340-046\/\" rel=\"attachment wp-att-22227\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-22227\" src=\"https:\/\/gcloudvn.com\/wp-content\/uploads\/2025\/04\/Thang-72024-2025-04-15T102340.046.jpg\" alt=\"\" width=\"600\" height=\"375\" srcset=\"https:\/\/gcloudvn.com\/wp-content\/uploads\/2025\/04\/Thang-72024-2025-04-15T102340.046.jpg 600w, https:\/\/gcloudvn.com\/wp-content\/uploads\/2025\/04\/Thang-72024-2025-04-15T102340.046-18x12.jpg 18w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/a>When L4's simulation system accurately predicts file access patterns, Google places only a small portion of the data on SSDs. These SSDs absorb the majority of reads (which typically occur on newly created files) before moving the data to cheaper storage, reducing overall costs.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In the ideal scenario, the file would be deleted before Google moved it to the HDD, thereby avoiding any I\/O operations on the HDD altogether.<\/span><\/p>\n<h2><span class=\"ez-toc-section\" id=\"Ket_luan\"><\/span>Conclusion<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>In short, Colossus, combined with the infrastructure power of Google Cloud, has realized a breakthrough storage solution that delivers superior SSD performance at a cost comparable to traditional HDDs. This opens up a huge opportunity for businesses, especially those processing large amounts of data and requiring fast access speeds, to optimize storage costs without sacrificing performance. This innovation not only improves operational efficiency but also promotes a strong digital transformation in many fields.<\/p>\n\n\t\t<\/div>\n\t<\/div>\n<div class=\"templatera_shortcode\"><div class=\"vc_row wpb_row vc_row-fluid\"><div class=\"wpb_column vc_column_container vc_col-sm-12\"><div class=\"vc_column-inner\"><div class=\"wpb_wrapper\"><div class=\"vc_message_box vc_message_box-standard vc_message_box-rounded vc_color-blue\" ><div class=\"vc_message_box-icon\"><i class=\"vc-mono vc-mono-technorati\"><\/i><\/div><p><a href=\"https:\/\/gcloudvn.com\/en\/main-logo-1\/\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-664\" src=\"https:\/\/gcloudvn.com\/wp-content\/uploads\/2021\/06\/main-logo-1.png\" alt=\"\" width=\"221\" height=\"72\" srcset=\"https:\/\/gcloudvn.com\/wp-content\/uploads\/2021\/06\/main-logo-1.png 214w, https:\/\/gcloudvn.com\/wp-content\/uploads\/2021\/06\/main-logo-1-18x6.png 18w, https:\/\/gcloudvn.com\/wp-content\/uploads\/2021\/06\/main-logo-1-183x60.png 183w\" sizes=\"auto, (max-width: 221px) 100vw, 221px\" \/><\/a>As a senior partner of Google in Vietnam, Gimasys has more than 10+ years of experience, consulting on implementing digital transformation for 2000+ domestic corporations. Some typical customers Jetstar, Dien Quan Media, Heineken, Jollibee, Vietnam Airline, HSC, SSI...<\/p>\n<p>Gimasys is currently a strategic partner of many major technology companies in the world such as Salesforce, Oracle Netsuite, Tableau, Mulesoft.<\/p>\n<p>Contact Gimasys - Google Cloud Premier Partner for advice on strategic solutions suitable to the specific needs of your business:<\/p>\n<ul>\n<li>Email: gcp@gimasys.com<\/li>\n<li>Hotline: 0974 417 099<\/li>\n<\/ul>\n<\/div><\/div><\/div><\/div><\/div><\/div><\/div><\/div><\/div><\/div>\n<\/section>","protected":false},"excerpt":{"rendered":"T\u1eeb YouTube v\u00e0 Gmail \u0111\u1ebfn BigQuery v\u00e0 Cloud Storage, g\u1ea7n nh\u01b0 t\u1ea5t c\u1ea3 c\u00e1c s\u1ea3n ph\u1ea9m c\u1ee7a Google \u0111\u1ec1u ph\u1ee5 thu\u1ed9c v\u00e0o Colossus - m\u1ed9t h\u1ec7 th\u1ed1ng l\u01b0u tr\u1eef ph\u00e2n t\u00e1n n\u1ec1n t\u1ea3ng h\u00e0ng \u0111\u1ea7u. L\u00e0 n\u1ec1n t\u1ea3ng l\u01b0u tr\u1eef&hellip;","protected":false},"author":2,"featured_media":22226,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1,135],"tags":[],"class_list":["post-22238","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-kienthuc","category-google-cloud-platform","entry","has-media"],"_links":{"self":[{"href":"https:\/\/gcloudvn.com\/en\/wp-json\/wp\/v2\/posts\/22238","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/gcloudvn.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/gcloudvn.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/gcloudvn.com\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/gcloudvn.com\/en\/wp-json\/wp\/v2\/comments?post=22238"}],"version-history":[{"count":0,"href":"https:\/\/gcloudvn.com\/en\/wp-json\/wp\/v2\/posts\/22238\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/gcloudvn.com\/en\/wp-json\/wp\/v2\/media\/22226"}],"wp:attachment":[{"href":"https:\/\/gcloudvn.com\/en\/wp-json\/wp\/v2\/media?parent=22238"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/gcloudvn.com\/en\/wp-json\/wp\/v2\/categories?post=22238"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/gcloudvn.com\/en\/wp-json\/wp\/v2\/tags?post=22238"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}