
This blog post is the third part in a series on containerizing SAP S/4HANA systems. I encourage you to read Part 1: Containerizing SAP S/4HANA Systems with Docker and Part 2: Deploying SAP S/4HANA Containers with Kubernetes. In the previous posts, we discussed what Docker is and how we can use it in conjunction with SAP S/4HANA to solve problems related to the N + M landscape. In Part 2 we discussed how we can use YAML files to rapidly deploy multiple Kubernetes pods running an image of our S/4HANA system.

Since publishing our first blog post on this topic, we have successfully implemented this infrastructure across multiple customer landscapes. Most recently, my colleagues and I wrapped up another project with an innovative customer in the CPG industry, where we successfully deployed additional application servers (AAS) in their QA environment using Kubernetes as a proof of concept (PoC). My colleague Maxim Shmyrev and I have been working hard to make this project successful, and we are excited about the potential value it brings to our customers.

What Was the Challenge?

The customer reached out to us and explained that they utilize Infrastructure as Code (IaC) to deploy and manage most of their IT infrastructure. They wanted to rapidly spin up multiple additional application servers for their QA system during periods of high demand.

This is a common scenario for many SAP customers: system usage increases, and Basis teams respond by provisioning additional, costly VMs. They then install the AAS using the Software Provisioning Manager (SWPM) and perform the necessary configurations. Meanwhile, hours pass while the QA backlog continues to grow. By the time the additional capacity is available, project timelines have already been impacted. Once demand decreases, additional time must be spent decommissioning these expensive VMs.

Our customer wanted to explore whether deploying their AAS on Azure Kubernetes Services (AKS), instead of provisioning multiple VMs, could provide a more efficient and cost-effective solution.

Our Approach

We created a Docker image of our customer's additional application server for their QA system using Docker and the Software Provisioning Manager (SWPM). When configuring the YAML files to deploy the pods in AKS, we added automation steps at the OS level of the container to ensure a seamless connection to the database and primary application server (PAS) during start-up. During initial testing we easily deployed 10 additional app servers across two nodes in AKS. Each node has approximately 251 GB of memory allocated and uses only ~35.2 GB (14%) of that memory to run these 10 app servers. It should be noted that AKS requires a minimum of two nodes; the workload itself, however, could run on a single node if required.
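To give a sense of what such a deployment looks like, here is a minimal sketch of a Kubernetes Deployment manifest. Everything in it is illustrative: the names, the image path, the wrapper script, and the resource figures are hypothetical placeholders, not the customer's actual configuration.

```yaml
# Illustrative sketch only -- hypothetical names and values throughout.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: s4hana-aas
spec:
  replicas: 10                  # number of AAS pods; scale up or down as demand changes
  selector:
    matchLabels:
      app: s4hana-aas
  template:
    metadata:
      labels:
        app: s4hana-aas
    spec:
      containers:
        - name: aas
          image: myregistry.azurecr.io/s4hana-aas:latest   # hypothetical image in a private registry
          # Hypothetical wrapper script baked into the image: performs the OS-level
          # start-up steps (e.g. connecting to the database and registering the
          # instance with the PAS/message server) before starting the instance.
          command: ["/usr/local/bin/start-aas.sh"]
          resources:
            requests:
              memory: "3.5Gi"   # roughly consistent with ~35 GB observed across 10 pods
```

Changing the `replicas` value (or running `kubectl scale`) is then all it takes to add or remove app servers.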

During our cost analysis, we compared the Kubernetes-based deployment against a traditional VM-based approach. A standard E32as_v5 virtual machine (32 vCPUs, 256 GiB memory) costs approximately $800 per month. Running 10 additional application servers using traditional VMs would therefore cost approximately $8,000 per month.

During our proof of concept, the total monthly cost for running 10 application servers on Kubernetes was approximately $3,000, including nodes, storage, and supporting infrastructure. The compute nodes themselves accounted for approximately $700 of this total cost.

While this estimate includes additional resources used during the proof of concept, and actual costs may vary depending on configuration and usage, the results indicate potential cost savings in the range of 40–60% when deploying additional application servers on Kubernetes.
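As a quick sanity check, the arithmetic behind these figures can be sketched in a few lines of shell. The dollar amounts are the approximate figures quoted above, not exact billing data; at the PoC numbers the raw saving works out above the quoted range, which the 40–60% estimate deliberately discounts for configuration and usage variance.

```shell
# Approximate monthly costs from the post (USD).
vm_monthly=800                                # one E32as_v5 VM per app server
n_servers=10
vm_total=$((vm_monthly * n_servers))          # traditional VM approach
k8s_total=3000                                # PoC total: nodes, storage, supporting infra

savings_pct=$(( (vm_total - k8s_total) * 100 / vm_total ))
echo "VMs: \$${vm_total}/mo  Kubernetes: \$${k8s_total}/mo  savings: ${savings_pct}%"
```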

The Result

This approach enables our customers to deploy multiple additional application servers within seconds, compared to the best-case scenario of approximately four hours required for a manual AAS installation. Furthermore, the additional app servers running in Kubernetes pods can be deleted instantly, and the nodes can be scaled down or retired when they are no longer needed. This flexibility allows our customers to remain agile, optimize infrastructure costs, and quickly respond to changing workload demands.

If you enjoyed this blog post, please stay tuned for other topics my team members and I will be covering next, and be sure to leave a comment below if you have any questions!


#SAPCHANNEL

By ali
