Cutting BigQuery Costs by 20% While Freeing Weeks of Engineering Time
Kristof Horvath
8 min read

In this post, we’re walking you through how a fast-growing retail AI SaaS company tackled unpredictable BigQuery costs with Rabbit: what the problem actually looked like at the workload level, how the company regained control over their GCP costs, and what the outcome was in financial and engineering terms.
For data-intensive SaaS companies running analytics at scale, BigQuery costs have a way of quietly becoming one of the largest line items in the cloud bill. The billing data arrives weeks late, granularity is poor, and by the time an anomaly surfaces, the spend is already locked in. This company’s experience is a useful case study in what it actually takes to go from reactive firefighting to continuous cost governance.
About the company in this BigQuery case study
The company in this case study is one of the fastest-growing retail and CPG AI SaaS businesses, delivering analytics platforms that help global retailers and supply chain leaders optimize inventory, merchandising, and pricing decisions in real time. Their award-winning products rely heavily on Google Cloud to process large-scale datasets and serve insights to enterprise customers.
As customer demand and data volumes accelerated, so did cloud bills. BigQuery alone accounted for nearly half of this company’s monthly GCP spend. What had once been manageable quickly became a barrier to scale: costs were unpredictable, visibility was limited, and engineering hours were drained by manual analysis.
“What I like about Rabbit is the wealth of insights it provides. These insights are not only accessible but also customizable, allowing us to act across major cost centers. Without Rabbit, I’d have to write long queries to understand the same facts. Now it’s immediate, and that has made a significant impact.”
— Engineering Leader, retail AI SaaS company
The challenge: BigQuery costs without visibility
For this customer, scale came with a hidden price tag. As their SaaS platforms powered more retail and supply chain decisions, BigQuery became the analytical backbone, running everything from real-time sales dashboards to predictive inventory models. The workloads scaled seamlessly, but the costs behind them did not.
On the surface, BigQuery’s On-Demand pricing promised elasticity: queries executed instantly, pipelines scaled without friction. In practice, this elasticity translated into unpredictable monthly bills. Costs only became visible weeks later in GCP billing exports, long after the queries had run and budgets were already overshot.
The core issue was the absence of cost observability at the workload level. Engineering teams had no way to:
- Baseline workloads or forecast query spend before execution.
- Distinguish exploratory analysis from production pipelines, even though they carried very different business value.
- Attribute costs at the job level, leaving finance and engineering equally blind to which workloads were driving spend.
Without query-level attribution or pre-execution cost estimation, the team relied on coarse-grained billing data that was better suited for accountants than for engineers. By the time anomalies surfaced, the spend was already locked in.
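For context on what pre-execution estimation can look like: BigQuery’s dry-run mode reports how many bytes a query would scan before any compute is spent. Below is a minimal sketch using the google-cloud-bigquery Python client, with an illustrative On-Demand rate of $6.25 per TiB (actual rates vary by region and over time) and a hypothetical table name:

```python
from google.cloud import bigquery

# Illustrative On-Demand rate in USD per TiB; real pricing varies by region.
PRICE_PER_TIB = 6.25

def estimate_query_cost(client: bigquery.Client, sql: str) -> float:
    """Dry-run a query and return its estimated On-Demand cost in USD."""
    config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    job = client.query(sql, job_config=config)  # no slots are consumed on a dry run
    return job.total_bytes_processed / 2**40 * PRICE_PER_TIB

client = bigquery.Client()
sql = "SELECT order_id, total FROM `my_project.sales.orders`"  # hypothetical table
print(f"Estimated cost before execution: ${estimate_query_cost(client, sql):.4f}")
```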
Attempts to regain control came at the expense of productivity. Engineers spent entire days writing and maintaining custom SQL queries against INFORMATION_SCHEMA just to understand cost drivers. These investigations were slow, repetitive, and incomplete — they couldn’t capture the full picture of slot-hour utilization or storage tier inefficiencies.
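To give a flavor of what those homegrown investigations involved, here is the kind of attribution script such a team might maintain. The region qualifier, seven-day window, and $6.25/TiB rate are illustrative assumptions; JOBS_BY_PROJECT is BigQuery’s standard jobs metadata view:

```python
from google.cloud import bigquery

# Attribute the last 7 days of On-Demand spend to users. The region and the
# $6.25/TiB rate below are illustrative and vary by deployment.
ATTRIBUTION_SQL = """
SELECT
  user_email,
  COUNT(*)                                    AS jobs,
  SUM(total_bytes_billed) / POW(2, 40)        AS tib_billed,
  SUM(total_bytes_billed) / POW(2, 40) * 6.25 AS est_cost_usd
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND job_type = 'QUERY'
GROUP BY user_email
ORDER BY est_cost_usd DESC
"""

client = bigquery.Client()
for row in client.query(ATTRIBUTION_SQL).result():
    print(f"{row.user_email}: ~${row.est_cost_usd:,.2f} across {row.jobs} jobs")
```

Scripts like this answer one question at a time; each new question meant another query to write, debug, and keep in sync with schema and pricing changes.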
Learn more about effective slot-hour costs:
What Does a BigQuery Job Actually Cost on a Reservation?
The downstream effect was financial and cultural:
- BigQuery dominated as the primary cost center, consuming nearly half of total GCP spend.
- Opaque pricing behavior meant that two workloads delivering the same business value could produce dramatically different cost profiles, with no reliable way to forecast them in advance.
- Engineering velocity suffered, with developers pulled into cost firefighting instead of building SaaS features for their retail and CPG customers.
Instead of analytics that scaled predictably with business growth, the team found itself trapped in a cycle of reactive cost governance: always catching up, never ahead.
The solution: Query-level observability and automated cost governance
The company needed more than dashboards: they needed granular, job-level visibility and automated levers for cost control. Rabbit delivered both, embedding cost intelligence directly into the engineering workflow without requiring process overhauls or long onboarding.
The shift was immediate. Rabbit plugged into existing pipelines and began surfacing insights that were previously invisible:
- Query-level cost attribution tied every BigQuery job to its financial footprint, eliminating the need for engineers to build and maintain custom SQL for spend analysis.
- Workload segmentation differentiated production pipelines from ad-hoc experimentation, exposing where reserved capacity was misaligned and where On-Demand execution was driving unnecessary spend (a back-of-the-envelope comparison of the two models follows this list).
- Temporal usage profiling highlighted concurrency peaks, idle slots, and time-based spend spikes, enabling smarter workload scheduling and reservation alignment.
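To make the second point concrete, the trade-off Rabbit automates can be approximated by hand. A rough sketch, where both rates are assumptions rather than quoted prices (On-Demand is billed per TiB scanned; reservations are billed per slot-hour, with rates depending on edition and region):

```python
# Rough comparison of BigQuery's two pricing models for a single workload.
# Both rates are illustrative assumptions, not quoted prices.
ON_DEMAND_PER_TIB = 6.25  # USD per TiB scanned (On-Demand)
SLOT_HOUR_RATE = 0.06     # USD per slot-hour (reservation/edition pricing)

def on_demand_cost(tib_scanned: float) -> float:
    return tib_scanned * ON_DEMAND_PER_TIB

def reservation_cost(slot_ms: int) -> float:
    return slot_ms / 3_600_000 * SLOT_HOUR_RATE  # slot-milliseconds -> slot-hours

# Hypothetical daily pipeline: scans 2 TiB but consumes only 90M slot-ms.
print(f"On-Demand:   ${on_demand_cost(2.0):.2f}")           # $12.50
print(f"Reservation: ${reservation_cost(90_000_000):.2f}")  # $1.50
```

Workloads that scan heavily but compute lightly tend to favor reservations, while small, infrequent scans often stay cheaper On-Demand, which is exactly why job-level data is needed to route each workload to the right model.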
Granular cost observability driving transparency
Before Rabbit, engineers relied on high-level GCP billing reports that provided broad trends but lacked the granularity to distinguish between production pipelines and ad-hoc exploration. Whenever they needed answers, they wrote custom queries — a time-consuming task that delayed decisions and often missed optimization opportunities.
Rabbit introduced transparency at a level that was previously unattainable:
- Query-level spend insights showed the true cost of every BigQuery job, instantly and without SQL.
- Separation of ad-hoc queries from production pipelines revealed where commitments were mismatched.
- Time-based usage patterns exposed exactly when workloads caused spend to spike and when resources were underutilized.
For the first time, the team could connect workloads directly to costs and base cloud budgeting decisions on real, timely data rather than assumptions.
Optimized engineering operations
On top of visibility, Rabbit introduced real-time anomaly detection. Instead of discovering overspend weeks later in billing exports, engineers were alerted instantly when queries deviated from baseline. This turned firefighting into continuous cost governance, where teams could intervene before inefficiencies compounded. A simplified illustration of such a baseline check follows the list below.
- Anomalies were flagged in near real time, ensuring rapid detection of unusual spending.
- Dashboards were ready to use, replacing homegrown solutions that required constant upkeep.
- Customizable insights allowed teams to tune Rabbit’s recommendations to match their own cost management priorities.
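Rabbit’s actual detection logic isn’t public, but the underlying idea of flagging deviation from a baseline can be illustrated with a toy z-score check over daily spend; the window size and threshold here are arbitrary choices:

```python
from statistics import mean, stdev

def flag_anomalies(daily_spend: list[float], window: int = 14,
                   z_threshold: float = 3.0) -> list[int]:
    """Flag days whose spend deviates more than z_threshold standard
    deviations from the trailing-window baseline. A toy stand-in, not
    Rabbit's actual algorithm."""
    flagged = []
    for day in range(window, len(daily_spend)):
        baseline = daily_spend[day - window:day]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(daily_spend[day] - mu) > z_threshold * sigma:
            flagged.append(day)
    return flagged

spend = [100, 98, 105, 97, 102, 99, 103, 101, 100, 104, 98, 102, 99, 101, 350]
print(flag_anomalies(spend))  # -> [14]: the day spend spiked to 350
```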
This freed engineers from low-value cost investigations and allowed them to focus on building the SaaS products their customers actually used.
The results: lower costs, higher engineering efficiency
When the company adopted Rabbit, the change was visible almost immediately. What had once been a reactive struggle with billing exports became a clear, real-time view of how every query and workload affected spend. This shift alone gave the team confidence that costs were finally measurable and manageable, not just numbers arriving weeks later in invoices.
From there, the financial impact followed:
- BigQuery spend dropped by 15-20% through query-level attribution and smarter pricing-model selection. By routing workloads between On-Demand and Reservation models, compute was used efficiently without reducing throughput or agility.
- Close to 50% of overall GCP spend was brought under Rabbit’s management, beginning with the largest cost center and expanding across the stack.
- Cloud Storage (GCS) lifecycle policies became systematic rather than ad hoc, with cold data automatically archived to lower-cost tiers, reducing long-term costs while keeping critical datasets accessible (a minimal example of such a rule follows this list).
- Beyond BigQuery and GCS, recommendations for Kubernetes, Compute Engine, Cloud Run, and Cloud SQL surfaced further efficiency opportunities.
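On the GCS side, the kind of lifecycle rule described above can be set with the google-cloud-storage client. The bucket name and 365-day threshold below are placeholders, not the company’s actual policy:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-analytics-exports")  # placeholder bucket name

# Move objects created more than 365 days ago to the Archive storage class.
bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=365)
bucket.patch()  # persists the updated lifecycle configuration

print(list(bucket.lifecycle_rules))
```

Because the rule lives on the bucket, archiving then happens continuously in the background instead of depending on someone remembering to run a cleanup job.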
For engineers, the change was cultural as much as financial. Before Rabbit, weeks were lost writing and maintaining SQL queries to track down cost anomalies. After adoption, that work was automated and streamlined, saving the equivalent of a full FTE each year. Instead of firefighting bills, the team focused on building their SaaS products, knowing that cost governance was embedded directly into their workflows.
Rabbit didn’t just cut costs — it embedded predictability, automation, and workload-level cost intelligence into daily practice. The team could keep scaling their analytics platform without the constant worry of unpredictable BigQuery bills.
Key optimization results with Rabbit
- 15-20% savings on BigQuery workloads through job-level insights and optimized pricing-model selection.
- Close to 50% of overall GCP spend brought under management with Rabbit, with BigQuery as the primary cost center.
- Cloud Storage (GCS) savings achieved by archiving rarely accessed data into lower-cost tiers — turning unused storage into immediate cost efficiency.
- Significant engineering time saved by removing the need to build and maintain SQL queries for cost analysis — reclaiming weeks of engineering capacity every year, equivalent to a full-time engineering role.
If you’re managing BigQuery at scale and costs are growing faster than your workloads, see how Rabbit helps data teams get query-level visibility and control, or book a demo right away.
Read our case studies:
Find out how other companies benefit from using Rabbit

