Cost Optimization in SAP Datasphere

Introduction:

One of the largest cost drivers I noticed within our SAP BTP services was SAP Datasphere. This immediately made me question why the cost was so high and what options we had to optimize it. At first, I found SAP BTP’s cost model difficult to understand: it isn’t straightforward, and getting a full picture requires quite a bit of reading and exploration.

The BTP Administrator Learning Journey helped clarify the different cost models SAP uses, as well as how to analyze spending through the Cost & Usage section in the Global Account. That helped me understand how costs are incurred for the different services in BTP.

From my experience, many BTP services offer limited opportunities for cost reduction. However, there are services such as Datasphere, HANA Cloud Database, and SAP ABAP Environment where optimization can make a meaningful difference.

In this article, I will focus specifically on how to optimize costs for SAP Datasphere tenants.

As many of you may already know, the cost of an SAP Datasphere tenant is driven primarily by its underlying SAP HANA Cloud Database. This means your Datasphere configuration, especially how it allocates and consumes Capacity Units (CUs), directly influences your overall cost.

What is a Capacity Unit?

A Capacity Unit represents a fixed amount of memory and computing resources used for a specific service in SAP BTP; you allocate it as quota.

You can estimate the number of CUs required for your workloads using the SAP Datasphere estimator:

https://datasphere-estimator-sac-saceu10.cfapps.eu10.hana.ondemand.com/
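To get an intuition for the kind of arithmetic the estimator performs, here is a minimal sketch in Python. The conversion rates below are purely hypothetical placeholders, not SAP’s actual CU factors; always use the official estimator linked above for real figures.

```python
# Hypothetical CU conversion rates, for illustration only.
# These are NOT SAP's actual rates.
HYPOTHETICAL_RATES = {
    "memory_gb": 10.0,   # assumed CUs per GB of memory per month
    "storage_gb": 0.5,   # assumed CUs per GB of disk per month
    "vcpu": 15.0,        # assumed CUs per vCPU per month
}

def estimate_monthly_cus(memory_gb: float, storage_gb: float, vcpus: float) -> float:
    """Rough monthly CU estimate for a given tenant configuration."""
    return (memory_gb * HYPOTHETICAL_RATES["memory_gb"]
            + storage_gb * HYPOTHETICAL_RATES["storage_gb"]
            + vcpus * HYPOTHETICAL_RATES["vcpu"])

# Example: a 64 GB memory / 256 GB disk / 8 vCPU tenant
print(estimate_monthly_cus(64, 256, 8))  # 640 + 128 + 120 = 888.0
```

The point of the sketch is simply that each resource dimension contributes linearly to the CU total, which is why right-sizing each dimension matters.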

Tenant Configuration

Always ensure that your Datasphere tenant is right‑sized based on the specific needs of each environment. This is one of the biggest factors that determines whether your costs go up or stay optimized. Collaborate with the relevant teams to understand the actual resource requirements.

Your sizing will largely depend on how many spaces your BI/Data teams use in Datasphere and the amount of memory and disk capacity allocated to those spaces. These requirements directly influence how you configure your tenant and, ultimately, how much you spend.

[Screenshot: memory and disk allocation per Datasphere space]

In this example, you can see that memory and disk were allocated to certain spaces even though they were not actually being used. To avoid unnecessary cost, make sure you understand how much memory and disk each space truly requires, and then allocate resources accordingly, typically adding around 50% headroom to accommodate potential growth or unexpected usage.

Keep in mind that over-provisioning resources can create challenges later. While some resources, like CPU, can be reduced easily, others, such as memory and disk, may require more disruptive actions. In certain cases, if you significantly over-provision, you may even need to recreate the tenant to downsize properly. Allocating only what is needed, plus a reasonable buffer, helps you avoid these complications and keeps your costs optimized.
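The 50% headroom guideline can be expressed as a simple back-of-the-envelope calculation. This is only a sketch; the space names and peak-usage figures below are invented for illustration.

```python
def recommended_allocation_gb(observed_peak_gb: float, headroom: float = 0.5) -> float:
    """Allocate the observed peak usage plus a configurable headroom buffer."""
    return round(observed_peak_gb * (1 + headroom), 1)

# Hypothetical per-space peak memory usage (GB)
spaces = {"SALES_REPORTING": 20.0, "FINANCE_DEV": 6.0, "DATA_STAGING": 14.0}

for name, peak in spaces.items():
    print(f"{name}: allocate ~{recommended_allocation_gb(peak)} GB")
# SALES_REPORTING: allocate ~30.0 GB
# FINANCE_DEV: allocate ~9.0 GB
# DATA_STAGING: allocate ~21.0 GB
```

Summing the recommendations per space gives a defensible starting point for the tenant-level memory figure, instead of guessing a round number.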

[Screenshot: tenant configuration showing performance class and resource sizing]

When configuring your Datasphere tenant, it’s important to choose the right Performance Class based on your workload: whether it is memory-intensive, compute-intensive, or a balance of both. After selecting the appropriate class, size your disk, memory, and compute resources to match your actual requirements.

Be aware that enabling features such as Elastic Compute Nodes will further increase your overall cost. The primary cost drivers in Datasphere are Memory, CPU, and Disk, so these are the areas where right‑sizing matters most.

You can also enable Multi‑Availability Zone, which adds resilience and high availability. The good news is that this feature is included at no additional cost.

How do you monitor and right-size?

There are a couple of ways to monitor your Datasphere usage:

1. System Monitor:
This provides real‑time visibility into how much memory and disk resources your Datasphere tenant is currently consuming.

[Screenshot: System Monitor showing current memory and disk consumption]

 

2. HANA Cloud Database (Usage History):
If you need historical insights, you can review usage data directly in the underlying HANA Cloud Database. Keep in mind that historical data retention is limited; typically, you can only view data from approximately the past month.

[Screenshot: HANA Cloud usage history, memory]

Here you can see memory usage: a maximum of 76 GB used, with an average of 58 GB.

[Screenshot: HANA Cloud usage history, CPU]

Here you can see CPU usage: it occasionally peaks at 95%, but the average is below 5%.
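Putting these usage-history figures together, a simple rule of thumb is to target the observed peak plus headroom, and only recommend a change if it actually shrinks the allocation. In the sketch below, the 128 GB memory allocation is a hypothetical example; only the 76 GB peak comes from the usage history above.

```python
def right_size_memory_gb(allocated_gb: float, max_used_gb: float,
                         headroom: float = 0.5) -> float:
    """Suggest peak usage plus headroom, capped at the current allocation."""
    suggested = round(max_used_gb * (1 + headroom))
    return min(allocated_gb, suggested)

# Peak of 76 GB from the usage history; a 128 GB allocation is assumed
print(right_size_memory_gb(128, 76))  # suggests 114 GB
```

The same reasoning applies to CPU: an average below 5% with only occasional 95% spikes usually means vCPUs can be reduced, and as noted earlier, CPU is the least disruptive resource to scale back.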

Conclusion:

Using this information, you can collaborate with your BI/Data teams to review both the current and historical usage of your Datasphere tenant. This will help you make informed decisions about right‑sizing the environment, which can significantly reduce your overall costs.

Thank you for taking the time to read my blog. I’d love to hear your thoughts, so feel free to share any feedback!

Thanks,
Raghu

 

 
