Actionable Data Analytics

How to Plan Microsoft Fabric Capacity

One of the biggest challenges companies face when adopting Microsoft Fabric is planning for capacity and converting the allocated resources into actual costs. Understanding Microsoft Fabric capacity is crucial when it’s time to assign an SKU for your Fabric workloads (F-SKU).

If you overestimate the required capacity, you’ll be overpaying, and if you don’t have enough capacity, your workloads may not be stable.

In this guide, I’ll break down the core concepts you need to know, including billing, common pitfalls, and the tools you can use to estimate your needs.

Microsoft Fabric is billed primarily in terms of compute capacity units (CUs). Each F-SKU gives you a fixed number of capacity units per second. For example:

  • F2 → 2 CUs per second
  • F4 → 4 CUs per second
  • F8 → 8 CUs per second

The higher the SKU, the more capacity units you receive, but the monthly bill climbs just as fast. Without careful planning, you could:

  • Overpay for unused units (if workloads don’t consume them fully).
  • Run out of capacity (if workloads exceed available units and the platform throttles or stops processing).
  • Miss budget forecasts due to hidden costs from storage, networking, and additional Power BI licenses.

That’s why planning Microsoft Fabric capacity upfront is essential.
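To make the billing model concrete, here is a small Python sketch that converts an F-SKU into the CU-seconds it provides per day, plus an indicative pay-as-you-go monthly cost. The $0.18 per CU-hour rate and the 730-hour month are illustrative assumptions only; always check current Azure pricing for your region.

```python
# Sketch: translate an F-SKU into available compute and an indicative
# pay-as-you-go cost. The $0.18 per CU-hour rate and 730-hour month are
# illustrative assumptions, not official pricing.

SKU_CUS = {"F2": 2, "F4": 4, "F8": 8, "F16": 16, "F32": 32, "F64": 64}

def cu_seconds_per_day(sku: str) -> int:
    """CU-seconds available in a 24-hour window for a given F-SKU."""
    return SKU_CUS[sku] * 24 * 60 * 60

def monthly_payg_cost(sku: str, rate_per_cu_hour: float = 0.18,
                      hours: int = 730) -> float:
    """Indicative monthly cost if the capacity runs all month."""
    return SKU_CUS[sku] * rate_per_cu_hour * hours

print(cu_seconds_per_day("F2"))           # 172800 CU-seconds per day
print(round(monthly_payg_cost("F4"), 2))  # 525.6 under the assumed rate
```

Note how the available CU-seconds scale linearly with the SKU: doubling the tier doubles both the compute and the bill.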

What Does Microsoft Fabric Capacity Include?

When you purchase an F-SKU, you’re primarily buying compute power. The full cost picture has three components:

  • Compute (main) – Measured in capacity units, this covers data processing, ingestion, querying, and analytics.
  • Storage – OneLake storage costs are not included in the F-SKU price and are billed separately.
  • Networking & extras – Bandwidth, data egress, and features like Power BI licensing are separate considerations.

The key point to remember: an F-SKU covers compute only, so budget separately for storage growth and for the reporting licenses your users will need.

Crucial Rule of Capacity Units

An important rule when using Microsoft Fabric is that unused capacity units expire after just one hour; you can’t bank them for later.

On the other hand, overconsuming capacity can stall workloads. If you exceed your daily CU allocation (including borrowing from future hours), workloads may stop until usage resets.

This means capacity planning isn’t just about how much you buy, but also about how consistently you consume it.
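Because unused capacity units expire hourly, a useful sanity check is to compare planned jobs against a single hour's CU budget. The job sizes below are hypothetical placeholders, not measured Fabric workloads:

```python
# Sketch: check whether planned jobs fit within the CU-seconds an F-SKU
# provides in one hour. Job sizes are hypothetical placeholders.

def hourly_cu_budget(cu_per_second: int) -> int:
    """CU-seconds available in one hour for a capacity of this size."""
    return cu_per_second * 3600

def fits_in_hour(jobs_cu_seconds: list, cu_per_second: int) -> bool:
    """True if the combined jobs stay within one hour's CU budget."""
    return sum(jobs_cu_seconds) <= hourly_cu_budget(cu_per_second)

# An F2 offers 2 CU/s * 3600 s = 7,200 CU-seconds per hour.
jobs = [3000, 2500, 1000]  # e.g. three hypothetical pipeline runs
print(fits_in_hour(jobs, cu_per_second=2))  # True: 6,500 <= 7,200
```

If the check fails for a peak hour but passes on average, that mismatch is exactly the consistency problem the paragraph above describes.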

The Three Core Drivers of Fabric Cost

From real-world experience, ongoing costs in Microsoft Fabric come down to three key factors:

Assets

  • Tables, files, or source systems. 
  • Example: 100 tables across SQL Server, Oracle, and JSON files.

Data

  • Daily incremental loads matter more than initial full loads.
  • Example: 2–3 GB of compressed data per day drives ongoing costs.

Frequency

  • Loading 100 tables once per day is cheaper than reloading them every hour.
  • Higher refresh rates = more CU consumption.
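The three drivers combine multiplicatively, which a back-of-the-envelope estimate makes visible. The `cu_per_gb` and `cu_per_table_overhead` factors below are made-up calibration constants; derive real ones from a proof-of-concept run, not from this example:

```python
# Sketch: a rough daily CU estimate from the three drivers (assets, data,
# frequency). cu_per_gb and cu_per_table_overhead are made-up calibration
# constants for illustration only.

def daily_cu_estimate(num_tables: int, gb_per_load: float,
                      loads_per_day: int, cu_per_gb: float = 50.0,
                      cu_per_table_overhead: float = 5.0) -> float:
    """Estimated CUs consumed per day by ingestion alone."""
    per_load = gb_per_load * cu_per_gb + num_tables * cu_per_table_overhead
    return per_load * loads_per_day

# 100 tables, 2.5 GB compressed per load: daily vs hourly refresh.
print(daily_cu_estimate(100, 2.5, 1))   # 625.0
print(daily_cu_estimate(100, 2.5, 24))  # 15000.0 -- 24x the consumption
```

Whatever the calibration constants turn out to be, the frequency multiplier dominates: hourly reloads cost 24 times what a daily reload does.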

Estimating Microsoft Fabric Capacity

Microsoft offers two tools to help with planning: Fabric Capacity Estimator and Fabric Capacity Planning Workbook.

Fabric Capacity Estimator

The Microsoft Fabric Capacity Estimator is a web-based tool that helps you predict the capacity needed and optimize costs based on planned workloads. Recently released in preview, the tool allows you to:

  • Enter total compressed data size (after ingestion into OneLake).
  • Define refresh frequency.
  • Specify the number of tables and workloads (Data Factory, Spark, Data Warehouse, Power BI, etc.).
  • Estimate daily Power BI users and report creators.

From a usability standpoint, it is simple, fast, and easy to operate; anyone can use it. It is even a good starting point for cost discussions. However, it can be too high-level for enterprise scenarios. It also has limited input parameters (no detail on source systems, compression, or concurrency), and it does not reflect real-life performance.

I suggest using this tool for budgetary estimates, rather than for production planning.

Fabric Capacity Planning Workbook

Before the web tool, Microsoft released an Excel workbook that’s still in circulation. It offers far more detail, letting you:

  • Define daily load increments (e.g., 2 GB/day).
  • Split workloads by source system type (SQL, Oracle, JSON, S3).
  • Adjust for compression rates and ingestion speeds.
  • Estimate concurrent queries and semantic model refreshes.
  • Factor in business calendar (e.g., 5 vs. 7 working days).

For estimating the resources your workloads actually need, the workbook is far more granular than the web estimator. It supports scenario testing, such as comparing peak versus average workloads, which is valuable in enterprise environments. However, it requires more technical knowledge to configure and is not officially maintained, so its parameters may fall behind Fabric updates.
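The workbook's per-source breakdown can be approximated in a few lines. The figures below, raw gigabytes per day and compression ratios per source system, are placeholder inputs, not benchmarks:

```python
# Sketch: a workbook-style estimate that splits the daily load by source
# system and applies per-source compression ratios. All figures are
# placeholder inputs, not benchmarks.

SOURCES = {
    # source: (raw GB per day, compression ratio after landing in OneLake)
    "sql_server": (4.0, 0.5),
    "oracle": (2.0, 0.5),
    "json_files": (6.0, 0.25),  # text formats usually compress harder
}

def compressed_daily_gb(sources: dict) -> float:
    """Total compressed GB ingested per day across all sources."""
    return sum(raw * ratio for raw, ratio in sources.values())

print(compressed_daily_gb(SOURCES))  # 4.5 GB/day after compression
```

Splitting by source like this is what lets the workbook surface differences the web estimator hides, such as JSON ingestion behaving very differently from SQL Server extraction.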

Comparing Estimators

Let’s say you want to load 100 tables (100 GB compressed) once per day, with 30 Power BI viewers and five report creators.

  • The web estimator recommends an F4 capacity, plus storage and Power BI Pro licenses.
  • The Excel workbook might suggest F8 or F16, depending on the source systems, the size of the incremental loads, and the amount of Spark usage.

The difference shows why relying only on the web estimator can lead you to underestimate real costs.

Cost Optimization Tips for Microsoft Fabric Capacity

Keep these best practices in mind to avoid overpaying for Microsoft Fabric Capacity. 

  • Don’t refresh hourly if daily is enough.
  • Avoid costly Dataflow Gen2; use stored procedures or Spark notebooks instead.
  • Separate compute and storage planning. Remember, OneLake storage billing is different.
  • Account for Power BI licensing. Each Pro license is around $15 per month.
  • Run proof-of-concept tests to see if your estimates hold up. Estimators are a starting point, not a replacement for workload testing.
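A practical way to apply the "separate compute and storage" tip is to keep every cost as its own line item. The $0.023/GB storage rate and the $15 Pro license below are illustrative assumptions used to show the structure:

```python
# Sketch: keep compute, storage, and licensing as separate line items, as the
# tips above suggest. The storage rate and license price are illustrative
# assumptions, not official pricing.

def monthly_cost_breakdown(compute: float, onelake_gb: float, pro_users: int,
                           storage_rate: float = 0.023,
                           pro_license: float = 15.0) -> dict:
    """Return each cost component so none hides inside a single number."""
    return {
        "compute": compute,
        "storage": onelake_gb * storage_rate,
        "licenses": pro_users * pro_license,
    }

costs = monthly_cost_breakdown(compute=262.80, onelake_gb=500, pro_users=35)
print(costs)
print(round(sum(costs.values()), 2))
```

With 35 Pro users, licensing alone can rival the compute bill, which is exactly why it must not be folded into the F-SKU estimate.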

Moving from Estimates to Proof of Concept

The next critical step is to run a Proof of Concept (PoC). It doesn’t have to be a massive project; just two to six weeks is enough. The goal of the PoC isn’t just to validate costs but also to address questions, such as:

  • Is Fabric the right platform for your organization?
  • Do you possess the necessary skills to manage and develop it?
  • What architecture makes sense for your case (two-tier warehouse, medallion, lakehouse)?
  • Do your initial cost estimates hold up under real workloads?

Why You Should Always Start with a Proof of Concept

When planning your Proof of Concept, expect background operations, such as data movement and transformation, to consume roughly 90% of your resources. Interactive activities, such as running reports, will use about 10% at most. That’s why your PoC should focus heavily on testing data movement and transformation scenarios.

Well-known problems in Fabric PoCs usually trace back to the volume, frequency, and complexity of data movement and the depth of transformation, so that is where your PoC should concentrate.

Microsoft’s Fabric Capacity Metrics App is a great way to investigate these issues. Once installed in your Power BI environment, it gives you a 14-day view of capacity consumption. If you need longer-term data, you can extend it with Fabric’s monitoring capabilities.

Performance Testing

Here are some strategies that really stand out.

  1. Isolate workloads by workspace.
  2. Name items clearly.
  3. Simulate real-world loads. 
  4. Run workloads on a schedule.
  5. Experiment with different transformation approaches. For example, warehouse stored procedures often use fewer resources than notebooks. 

Fabric Payment Models

Fabric gives you two pricing models:

  • Pay-as-you-go (PAYG): Maximum flexibility. Pay only for the hours the capacity is running, and scale up, down, or pause as needed.
  • Reservations: You commit to capacity units for a discounted rate (up to 41% savings). These can be shared across subscriptions, management groups, and workloads for greater efficiency.

Instead of buying one big reservation, I recommend considering multiple smaller ones so you can cancel or adjust later without disrupting everything; for example, two F2 reservations instead of one F4.
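A quick calculation shows why splitting a reservation costs nothing up front. The rate and the 41% discount below are assumptions for illustration; the point is the flexibility, not the exact figures:

```python
# Sketch: why several small reservations can beat one large one. The rate
# and the 41% discount are assumptions for illustration.

PAYG_RATE_PER_CU_MONTH = 0.18 * 730   # assumed $/CU-hour * hours per month
RESERVATION_DISCOUNT = 0.41

def reserved_monthly(cus: int) -> float:
    """Monthly cost of reserving this many CUs at the discounted rate."""
    return cus * PAYG_RATE_PER_CU_MONTH * (1 - RESERVATION_DISCOUNT)

one_f4 = reserved_monthly(4)
two_f2 = 2 * reserved_monthly(2)
print(round(one_f4, 2), round(two_f2, 2))  # same monthly price either way
# But with two F2 reservations you can later cancel one and fall back to
# reserved_monthly(2), instead of losing the whole discount at once.
```

Since reservation pricing is linear in CUs, the split is free insurance: same price today, more room to shrink tomorrow.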

Conclusion

Planning Microsoft Fabric capacity is a balancing act between performance and cost. Fabric is powerful, but without careful planning, costs can spiral quickly. An F-SKU’s headline number alone won’t tell you how far that capacity will stretch for your workloads, so before committing, dig deeper into the drivers of your ongoing expenses. Estimation tools are a great starting point, but not a final answer.

One recommended path is to run a focused proof of concept, then use the Fabric Capacity Metrics App to measure usage and identify cost-cutting opportunities. Whether you choose pay-as-you-go or reservations, approaching Fabric capacity with data-driven planning helps you avoid overspending while keeping workloads stable and scalable.

Ultimately, success in Microsoft Fabric is not about buying the largest SKU, but buying the right one for your needs.

Frequently Asked Questions

Question: Can I scale Microsoft Fabric capacity up and down automatically?

Answer: Currently, Fabric doesn’t support auto-scaling like some cloud services. You can manually scale capacity units up or down, or use pay-as-you-go for more flexibility. Reservations require a commitment but offer cost savings.

Question: How do weekends and holidays impact Fabric capacity usage?

Answer: Capacity is billed continuously, regardless of whether workloads are active. If your data processing drops significantly on weekends or holidays, you may want to pause pay-as-you-go capacity to avoid waste.

Question: Can I move capacity units between regions or tenants?

Answer: No. Fabric capacities are region-specific, and you can’t move them between tenants. When planning, make sure your workloads are provisioned in the correct Azure region to avoid data residency or latency issues.
