
Zoltán Guth, CTO
Balázs Molnár, CEO
Rabbit Team · 7 min read
75% of companies overspend their cloud budgets. What starts as an exciting migration often ends with finance teams and engineers at odds over escalating Google Cloud Platform (GCP) expenses.
We’ve compiled strategies that have helped organizations like Bitrise, ResearchGate, and many other Rabbit customers transform their approach to Google Cloud spending—typically saving 30% and allowing engineering teams to focus on innovation instead of cost-cutting.
Behind these statistics lies a deeper challenge: the disconnect between cloud’s promise and its operational reality. Let’s examine the transparency gap that creates friction between teams and how bridging it transforms cloud cost management.
“We migrated to GCP expecting to cut costs, but three years in, we’re spending more than ever, and I can’t even tell you exactly where the money’s going.”
For GCP users, the platform’s rich feature set creates a dilemma. The very tools that empower your team to build innovative solutions also introduce layers of pricing complexity that aren’t obvious until the bill arrives.
When we started working with a mid-sized fintech company, they had been paying for hundreds of unused disks and idle virtual machines for nearly 18 months. Nobody noticed because the costs were buried in aggregate billing data.
This gap manifests in four critical ways:
One engineering leader put it bluntly: “We only started caring about cloud costs when our CEO showed up at our stand-up with the monthly bill.”
What’s needed is a solution built specifically for Google Cloud that bridges the gap between engineering and finance by providing:
Cost optimization works best when it’s not forced on engineering teams. When engineers can actually see where cloud money is going, they often find ways to cut waste without slowing down development.
It’s not about restricting innovation; it’s about giving teams the information they need to make smarter technical decisions that happen to cost less, too.
For businesses with 50-400 employees spending between $20,000 and $400,000 monthly on GCP without deep in-house cloud expertise, common pain points include:
One customer discovered $5,000 in monthly savings on their first day simply by identifying idle Compute Engine instances.
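Findings like this usually start with a sweep of the project's resource inventory. The sketch below shows the core filtering logic in Python over hypothetical inventory records; in practice you would feed it rows exported from Cloud Asset Inventory or the billing export rather than hard-coded data.

```python
# Sketch: flag idle Compute Engine resources from an exported
# asset inventory. The record format here is hypothetical.

def find_idle_resources(resources):
    """Return resources that look idle: unattached disks, and
    stopped (TERMINATED) instances whose disks and reserved IPs
    still accrue charges."""
    idle = []
    for r in resources:
        if r["type"] == "disk" and not r.get("users"):
            idle.append(r["name"])          # disk attached to no instance
        elif r["type"] == "instance" and r.get("status") == "TERMINATED":
            idle.append(r["name"])          # stopped VM, disks still billed
    return idle

inventory = [
    {"type": "disk", "name": "old-build-disk", "users": []},
    {"type": "disk", "name": "prod-db-disk", "users": ["prod-db-1"]},
    {"type": "instance", "name": "test-vm", "status": "TERMINATED"},
    {"type": "instance", "name": "prod-db-1", "status": "RUNNING"},
]

print(find_idle_resources(inventory))  # ['old-build-disk', 'test-vm']
```

The same two checks can also be run directly from the CLI (for example, listing disks whose `users` field is empty), but the point is the rule itself: anything attached to nothing, or stopped but still provisioned, is a candidate for deletion or snapshotting.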
One client reduced their BigQuery compute spending from $100,000 to $66,000 monthly using Rabbit’s BigQuery Autoscaler, with zero manual intervention.
For organizations with significant data operations, BigQuery often represents one of the largest components of GCP spending. The platform’s analytics capabilities come with a pricing structure that requires specialized optimization strategies.
Let’s examine how data teams can tackle these unique challenges to reduce costs without sacrificing performance or insights.
BigQuery’s pricing includes:
- Compute (analysis) charges for processing queries
- Storage charges, split between active storage and cheaper long-term storage for tables untouched for 90 days

For compute, it offers two fundamentally different pricing models:
- On-demand pricing, billed per volume of data each query scans
- Capacity-based pricing, billed for reserved slot capacity regardless of how much data queries scan
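Choosing between the two models is ultimately a break-even calculation on your monthly scan volume. The sketch below shows the arithmetic with illustrative rates; check the current GCP price list for your region and edition before relying on any of these numbers.

```python
# Sketch: rough break-even between BigQuery's two compute
# pricing models. Rates are illustrative placeholders, not
# current list prices.

ON_DEMAND_PER_TIB = 6.25      # USD per TiB scanned (illustrative)
SLOT_HOUR_PRICE = 0.04        # USD per slot-hour (illustrative)

def on_demand_cost(tib_scanned_per_month):
    """Monthly cost if billed per TiB scanned."""
    return tib_scanned_per_month * ON_DEMAND_PER_TIB

def capacity_cost(slots, hours_per_month=730):
    """Monthly cost of a flat slot reservation, scan volume ignored."""
    return round(slots * hours_per_month * SLOT_HOUR_PRICE, 2)

# A team scanning 500 TiB/month vs. a 100-slot reservation:
print(on_demand_cost(500))    # 3125.0
print(capacity_cost(100))     # 2920.0
```

At these illustrative rates the reservation wins once monthly scans exceed roughly 470 TiB; below that, on-demand is cheaper. Teams with spiky workloads often mix the two, which is where autoscaling slot reservations earn their keep.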
A marketing analytics firm discovered they had been running a costly report-generating query hourly for months. By adjusting the frequency to run just once per week, they reduced costs by 98.5%, saving over $5,000 monthly on a single query.
One customer restructured a mission-critical data pipeline, reducing processing costs by 42% without changing output or performance.
A media company discovered thousands of old tables, freeing up over 100TB of storage and saving approximately $5,000 monthly.
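Stale-table cleanups like this follow a simple rule: anything not modified in a long time is a deletion (or archival) candidate. The sketch below applies that rule in Python over hypothetical metadata rows; in practice you would pull last-modified timestamps and sizes from a dataset's `INFORMATION_SCHEMA` / `__TABLES__` metadata, and the one-year cutoff is an assumed policy, not a recommendation.

```python
# Sketch: flag tables not modified within a cutoff window as
# cleanup candidates. Metadata rows here are hypothetical.

from datetime import datetime, timedelta

def stale_tables(tables, now, max_age_days=365):
    """Return (name, size_gb) for tables older than the cutoff."""
    cutoff = now - timedelta(days=max_age_days)
    return [(t["name"], t["size_gb"])
            for t in tables
            if t["last_modified"] < cutoff]

now = datetime(2025, 1, 1)
tables = [
    {"name": "events_2021", "size_gb": 5000,
     "last_modified": datetime(2021, 6, 1)},
    {"name": "events_live", "size_gb": 1200,
     "last_modified": datetime(2024, 12, 30)},
]

candidates = stale_tables(tables, now)
print(candidates)                         # [('events_2021', 5000)]
print(sum(gb for _, gb in candidates))    # 5000
```

Summing the flagged sizes gives the reclaimable storage before anyone deletes anything, which makes it easy to get sign-off from table owners first.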
These BigQuery optimization strategies work well for established GCP users, but organizations during migration face different challenges.
Moving workloads to GCP introduces unique cost considerations that require careful planning. As one customer discovered, “Our first month after migrating to GCP, our bill was triple what we expected.”
The two common migration approaches each introduce their own cost challenges:
A healthcare organization built a detailed cost model showing both immediate post-migration costs and optimized long-term expenditures, securing executive buy-in.
A financial services company optimized each phase as it completed rather than waiting for the entire migration to finish.
A retail customer analyzed freshly migrated GKE environments, discovering they could reduce costs by 42% by adjusting resource allocations and implementing spot instances for development environments.
Companies that integrate cost awareness throughout their migration process typically achieve 25-40% lower overall migration costs.
While migration presents its own cost challenges, enterprise organizations face these issues at significantly larger scale and complexity.
As cloud environments grow, siloed teams, complex data pipelines, and sophisticated chargeback requirements create additional layers of cost management challenges that require more comprehensive solutions.
Cloud cost management starts with visibility, continues with smart optimizations, and becomes most effective when teams have the right context to act. For smaller and mid-sized companies, solving inefficiencies and automating routine tasks can already lead to significant savings.
But these challenges grow with scale. In the next part of this series, we’ll focus on what cost optimization looks like in complex enterprise environments—covering cross-team transparency, chargeback models, and advanced optimization strategies for large-scale GCP use.