How to Configure BigQuery Baseline Slots and Slot Commitments
Kristof Horvath
10 min read

This post explains what baseline slots and commitments actually do in BigQuery, why each is easy to misconfigure, and how to set both correctly.
A team switches to BigQuery reservations, does the pricing math, picks Enterprise edition, and proceeds to configure their reservation. A few months later, their bill is lower than before — but not nearly as low as the analysis suggested. The culprit is almost always the same: baseline slots set too high, commitments locked in before usage patterns were stable enough to justify them, or both. These two settings are where most reservation waste originates, and they’re rarely obvious from the configuration UI alone.
Learn all about BigQuery Editions and Reservations. Download our white paper:
What are baseline slots in a BigQuery reservation?
Baseline slots define the minimum guaranteed compute capacity that is always allocated to a reservation, whether or not any queries are running.
A few things are important to understand about how they’re billed:
- Baseline slots are allocated every second the reservation exists
- You are billed for them even when no queries are running: nights, weekends, maintenance windows
- Autoscaling does not affect them: the autoscaler scales up above the baseline, but baseline capacity is always present regardless
This is what makes baseline the highest-risk setting in a reservation configuration. Unlike autoscaled capacity, which scales down when not needed (with a minimum 60-second billing window), baseline is permanent. Set it based on peak demand, and you are paying for peak capacity around the clock.
Why do most teams set BigQuery baseline slots too high?
The instinct when provisioning compute capacity is to provision for peak demand. For infrastructure like servers or Kubernetes nodes, this logic is sound: if you need capacity during a traffic spike, you need it available in advance. Baseline slots invite the same instinct — and the same mistake.
Consider the following example. A data team runs scheduled pipelines that execute for 6 hours per day, processing large volumes of data at high parallelism. During those windows, they need 5,000 slots. Outside those windows, the reservation sits idle. If they set baseline to 5,000 slots:
- For the 6 hours per day when pipelines are running: the baseline is justified
- For the remaining 18 hours per day: they’re paying for 5,000 slots consuming nothing
- At $0.06/slot-hour (Enterprise): 5,000 slots × 18 hours × $0.06 = $5,400/day in idle baseline spend
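The arithmetic above can be sketched in a few lines. This is a toy calculation using the rates quoted in the example; the function name and parameters are illustrative, not part of any BigQuery API.

```python
# Daily cost of idle baseline capacity, using the example's assumed rates.
ENTERPRISE_RATE = 0.06  # $/slot-hour, Enterprise pay-as-you-go (from the post)

def idle_baseline_cost(baseline_slots, idle_hours_per_day, rate=ENTERPRISE_RATE):
    """Dollars per day spent on baseline slots that consume nothing."""
    return baseline_slots * idle_hours_per_day * rate

daily = idle_baseline_cost(5000, 18)  # ~$5,400/day
yearly = daily * 365                  # the annualized idle spend
```

Multiplying out to a year makes the stakes concrete: an oversized baseline of this shape wastes roughly $2M annually before any query runs.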
At scale, this is where the average 30-40% reservation waste comes from. It’s not a theoretical estimate: it reflects what we see consistently in reservation configurations that haven’t been deliberately sized.
The problem isn’t that baseline is a bad feature. It’s that baseline should reflect your true minimum demand: the floor below which your usage never drops during normal operations. Peak demand is the autoscaler’s job.
How should you set BigQuery baseline slots?
The practical rule: baseline should reflect your steady-state minimum, not your peak. For most workloads, the right baseline is lower than teams expect — often zero. Here’s how to think through it:
If your workload is batch-heavy or pipeline-driven: pipelines run on a schedule, then stop. During the off-hours, the reservation genuinely needs no capacity. Set baseline to 0 or a very small value (enough for lightweight monitoring queries, if applicable), and let the autoscaler handle the execution windows. You pay for autoscaled capacity only when it’s being used — though note the 60-second minimum billing window per scale event, which our autoscaling post covers in detail:
Learn more:
BigQuery Reservations: How Does Autoscaling Really Work?
If your workload includes always-on services: continuous ingestion pipelines, real-time dashboards, or any query stream that runs 24/7 genuinely benefits from a non-zero baseline. Set it to reflect the minimum slot consumption during your lowest-traffic period — not the average, and not the peak.
To find the right number, use historical slot usage data from INFORMATION_SCHEMA.JOBS_BY_PROJECT. Our BigQuery Reservation Planner includes a query (slot_usage_statistic_for_max_slot_setting.sql) that surfaces per-second slot usage statistics (including P50, P90, P95, and P99) over a 30-day window. A sensible baseline for most workloads is at or below the P10 or P25 of your actual slot consumption during active hours. If your P10 is 0, baseline should be 0.
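The percentile logic can be sketched as follows. The samples here are toy numbers standing in for per-second slot usage pulled from INFORMATION_SCHEMA; the `percentile` helper is a plain nearest-rank implementation, not the referenced SQL query.

```python
# Nearest-rank percentile over per-second slot-usage samples (toy data).
import math

def percentile(samples, p):
    """Nearest-rank percentile (p in 0..100) of a non-empty sample list."""
    ordered = sorted(samples)
    k = max(math.ceil(p / 100 * len(ordered)) - 1, 0)
    return ordered[k]

usage = [0, 0, 120, 150, 160, 180, 200, 450, 900, 1000]  # toy samples
percentile(usage, 10)  # P10 is 0 here, so baseline should be 0
percentile(usage, 25)  # P25 is 120: a defensible non-zero baseline ceiling
```

The point of anchoring on a low percentile is that baseline is billed continuously: any capacity above what you use nearly all the time is paid-for idle time.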
The broader principle: start conservative. A baseline of 0 with a well-configured autoscaler is almost always more cost-efficient than a baseline padded to feel safe. You can increase baseline if monitoring reveals consistent queuing during peak windows. But that’s a data-driven adjustment, not an upfront hedge.
What are BigQuery slot commitments, and when do they make sense?
Slot commitments are a pricing mechanism that lets you trade flexibility for a lower rate on your reservation capacity. In exchange for committing to a minimum number of slots for 1 or 3 years, Google applies a discount:
- 1-year slot commitment: 20% discount off the pay-as-you-go slot-hour rate
- 3-year slot commitment: 40% discount
Slot commitments are available on Enterprise and Enterprise Plus editions only. They do not apply to Standard.
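Translating the discount tiers into effective rates makes the trade-off concrete. This sketch assumes the $0.06/slot-hour Enterprise pay-as-you-go rate quoted earlier; the constant and function names are illustrative.

```python
# Effective slot-hour rate under each slot-commitment term.
PAYG_RATE = 0.06  # $/slot-hour, Enterprise pay-as-you-go (assumed from the post)
DISCOUNTS = {"1yr": 0.20, "3yr": 0.40}

def committed_rate(term):
    """Discounted slot-hour rate for a given commitment term."""
    return PAYG_RATE * (1 - DISCOUNTS[term])

committed_rate("1yr")  # ~$0.048/slot-hour
committed_rate("3yr")  # ~$0.036/slot-hour
```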
Learn more about BigQuery Editions:
BigQuery Editions Comparison: Standard vs Enterprise vs Enterprise Plus
The important detail: committed slots behave like a paid baseline. They are always billed, whether or not they are being used. The same dynamics that make an oversized baseline expensive apply equally to commitments, except that with a commitment, you’ve locked in that cost for one or three years, at a discounted rate.
This means commitments are only appropriate when your usage is stable and predictable at the level you’re committing to. Committing to 1,000 slots because you sometimes peak at 1,000 is the wrong approach: you’ll pay for 1,000 slots around the clock for a year. Committing to 200 slots because that’s your genuine floor (the capacity you’re actually using continuously, 24/7) can reduce your effective slot-hour cost substantially.
Learn more:
What Does a BigQuery Job Actually Cost on a Reservation?
A useful framing: commit to the floor, let the autoscaler handle the rest. Commitments should reflect your minimum steady-state usage, just as baseline should.
What is the difference between slot commitments and spend-based commitments?
BigQuery offers two types of commitments with meaningfully different billing mechanics:
Slot commitments are what most teams think of when they consider BigQuery capacity discounts. They apply to capacity-based pricing specifically, are billed per second, and are available on Enterprise and Enterprise Plus only.
Spend-based commitments work differently. They apply at the billing level across eligible SKUs (including BigQuery reservations, Cloud Composer 3, and Dataplex) within the same Cloud Billing account. They’re billed on an hourly average rather than per second. Discount terms are lower: 10% for 1-year, 20% for 3-year.
The distinction matters because of how workload timing interacts with billing granularity.
Slot commitments bill per second. If your pipeline runs for the first 10 minutes of every hour and sits idle for the remaining 50, a slot commitment charges you for all 60 minutes of capacity, even though you used 10. At low utilization within each billing cycle, you’re effectively paying for idle time at a discounted rate, which can still be more expensive than the alternative.
Spend-based commitments bill on the hourly average. In that same scenario (10 active minutes per hour), a spend-based commitment would bill you for roughly 17% of your peak slot consumption, because the hourly average reflects the short usage window. For bursty, sub-hourly workloads, spend-based commitments often produce a lower effective rate despite the smaller discount percentage.
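The 17% figure falls directly out of the hourly-average arithmetic. This is a toy model of the scenario above; the slot count is hypothetical.

```python
# The bursty scenario: active for 10 minutes out of every 60 at peak slots.
peak_slots = 1000      # hypothetical peak consumption during the burst
active_minutes = 10

slot_commitment_billed = peak_slots                    # whole hour of capacity
spend_based_billed = peak_slots * active_minutes / 60  # hourly average ≈ 166.7

ratio = spend_based_billed / slot_commitment_billed    # ≈ 0.167, the ~17% above
```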
When should you use BigQuery slot commitments vs spend-based commitments? Here’s some practical guidance:
- Use slot commitments when usage is stable at the per-second level (for example, continuous ingestion pipelines, always-on dashboard queries, high-concurrency workloads that run throughout the day)
- Use spend-based commitments when workloads are bursty within each hour but consistent across hours (for example, a pipeline that fires every hour but only runs for a few minutes each time)
- In both cases, commit only to the capacity level you’re confident is your true minimum over the full commitment term
- Don’t commit based on a single week of data. Run at least 30 days of data through the analysis before locking in
One additional point: the higher discount rate on slot commitments (20%/40% vs 10%/20%) makes them more attractive when the usage pattern justifies it. If you can reliably fill committed capacity at the per-second level, slot commitments deliver meaningfully more savings. Spend-based commitments are the right fallback for everything else.
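The decision above can be framed as a break-even on within-hour utilization. This sketch combines the discount figures and the assumed $0.06 Enterprise rate from earlier; both functions are illustrative simplifications, not billing formulas from Google.

```python
# Effective cost per *used* slot-hour under each mechanism (simplified model).
PAYG_RATE = 0.06  # $/slot-hour, assumed Enterprise pay-as-you-go rate

def slot_commitment_cost(utilization, discount=0.20):
    # Committed capacity is billed for the full hour regardless of use,
    # so idle time inflates the cost of each slot-hour actually consumed.
    return PAYG_RATE * (1 - discount) / utilization

def spend_based_cost(discount=0.10):
    # Billed on the hourly average, so idle time inside the hour isn't charged.
    return PAYG_RATE * (1 - discount)

slot_commitment_cost(1.0)      # ~$0.048: beats spend-based (~$0.054) when fully used
slot_commitment_cost(10 / 60)  # ~$0.288: far worse for the 10-min/hour workload
```

Under this model, the slot commitment's larger discount only pays off when utilization within each hour stays high; for the bursty pattern, the smaller spend-based discount wins decisively.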
How does Rabbit help with baseline and commitment configuration?
The challenge with baseline and commitment sizing is that it requires analysis of historical usage at high granularity (per-second slot consumption over weeks or months) before any configuration decision is made. This is why most teams get it wrong: they configure first and analyze later (if at all).
Rabbit approaches this the other way around. Before recommending any reservation configuration, Rabbit’s Reservation Planner analyzes your actual INFORMATION_SCHEMA data to determine:
- Which projects have usage patterns that justify reservations (and which are cheaper on on-demand)
- What the optimal max slot setting is for each reservation, based on P90/P95/P99 historical usage
- What baseline level is justified by your true continuous minimum demand
The recommendations include a 30% overhead buffer to account for autoscaler billing windows and transient usage spikes, so the numbers reflect what your reservation will actually cost, not an idealized scenario.
On commitments, Rabbit surfaces opportunities once usage patterns have been stable for long enough to justify locking in: it identifies the slot capacity you’re consistently consuming and flags when a 1-year or 3-year commitment would produce meaningful savings relative to pay-as-you-go pricing, including the comparison between slot commitments and spend-based commitments for your specific usage profile.
Lufthansa Group used this approach to cut their BigQuery spend by 52%, combining an initial reservation setup based on Rabbit’s Planner analysis with ongoing autoscaler optimization. Their experience is described in the Lufthansa Group case study.
Baseline slots and commitments are where most of the savings opportunity lives in a BigQuery reservation setup — and where most of the waste comes from when they’re misconfigured. Getting both right requires sizing against your actual usage patterns before committing to any configuration.
Our white paper walks through the complete setup process: how to analyze your projects before switching, how to create and assign reservations, and how to tune the full configuration over time as usage evolves.
Download the white paper: How To Get Started With BigQuery Editions and Reservations
This is Part 3 of a series supporting the white paper. If you’re still deciding whether reservations make sense for your workloads, start with Part 1: Comparing BigQuery Pricing Models: On-demand vs Capacity-based Reservations. For the BigQuery edition choice (which determines whether baseline slots and commitments are available to you at all) see Part 2: BigQuery Editions Comparison: Standard vs Enterprise vs Enterprise Plus. For a deeper look at how autoscaling interacts with baseline and max slots, see Part 4: BigQuery Reservations: How Does Autoscaling Really Work?.


