This tutorial deploys a Python Flask application to Kyma runtime. Most tutorials lean towards Node.js or Java, so I wanted to document the Python path properly — including the gotchas that aren’t covered in the official docs.
Prerequisites
- SAP BTP account with Kyma enabled (trial is fine to follow along)
- `kubectl` installed and configured with your Kyma kubeconfig
- Docker Desktop installed and running
- Docker Hub account
- Python 3.11+
Project Structure
```
my-btp-python/
├── app.py
├── requirements.txt
├── Dockerfile
└── k8s/
    ├── deployment.yaml
    ├── service.yaml
    └── apirule.yaml
```
The Flask App
A simple two-endpoint app — a health check and a data endpoint:
app.py

```python
from flask import Flask, jsonify
import os

app = Flask(__name__)

@app.route("/health", methods=["GET"])
def health():
    return jsonify({"status": "ok"}), 200

@app.route("/api/data", methods=["GET"])
def get_data():
    return jsonify({
        "message": "Hello from Kyma!",
        "environment": os.getenv("ENVIRONMENT", "dev")
    }), 200

if __name__ == "__main__":
    port = int(os.getenv("PORT", 8080))
    app.run(host="0.0.0.0", port=port)
```
requirements.txt

```
flask==3.0.3
```
Dockerfile

```dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

EXPOSE 8080
ENV FLASK_APP=app.py
CMD ["flask", "run", "--host=0.0.0.0", "--port=8080"]
```
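The `flask run` server is fine for a demo, but it is single-threaded and not meant for production traffic. A hedged alternative is to serve through gunicorn instead; this sketch assumes you also pin `gunicorn` in requirements.txt:

```dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

EXPOSE 8080
# "app:app" = module app.py, Flask object named "app".
# 2 workers is a guess sized for the small resource limits used later; tune for your pod.
CMD ["gunicorn", "--bind", "0.0.0.0:8080", "--workers", "2", "app:app"]
```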
Apple Silicon Users
Kyma clusters run on amd64 (x86) nodes. If you’re on Apple Silicon your Mac builds arm64 images by default, which won’t run on the cluster. Build explicitly for the right platform:
```sh
docker buildx build --platform linux/amd64 -t yourusername/btp-python-app:v1 --push .
```
Kubernetes Manifests
k8s/deployment.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: btp-python-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: btp-python-app
  template:
    metadata:
      labels:
        app: btp-python-app
    spec:
      containers:
        - name: btp-python-app
          image: yourusername/btp-python-app:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: ENVIRONMENT
              value: "kyma-trial"
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "128Mi"
              cpu: "200m"
```
Set `imagePullPolicy: Always` during development; without it, Kubernetes may reuse a cached image on the node and ignore newly pushed changes. Note that the pull only happens when a pod starts, so after pushing a new image under the same tag, trigger a fresh pull with `kubectl rollout restart deployment/btp-python-app`.
k8s/service.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: btp-python-app
  namespace: default
spec:
  selector:
    app: btp-python-app
  ports:
    - port: 80
      targetPort: 8080
```
APIRule v2
APIRule v1beta1 was removed from Kyma in mid-2025. If you’re following older tutorials you’ll hit errors. The current version is v2 and the syntax changed significantly.
First, get your cluster domain:
```sh
kubectl get gateway -n kyma-system kyma-gateway -o jsonpath='{.spec.servers[0].hosts[0]}'
```
This returns something like *.c-abc123.kyma.ondemand.com. Strip the *. prefix and use that as your base domain.
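If you want to script that step, the wildcard prefix can be stripped with plain shell parameter expansion (a sketch; `c-abc123` is just the example domain from above):

```shell
# Example gateway host as returned by the kubectl command above
HOST='*.c-abc123.kyma.ondemand.com'

# Remove the leading "*." to get the base domain
DOMAIN="${HOST#\*.}"
echo "$DOMAIN"

# The FQDN to put in the APIRule's hosts list
echo "btp-python-app.$DOMAIN"
```

In practice you would populate `HOST` directly from the `kubectl get gateway` command instead of hard-coding it.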
k8s/apirule.yaml

```yaml
apiVersion: gateway.kyma-project.io/v2
kind: APIRule
metadata:
  name: btp-python-app
  namespace: default
spec:
  gateway: kyma-system/kyma-gateway
  hosts:
    - btp-python-app.c-abc123.kyma.ondemand.com
  service:
    name: btp-python-app
    port: 80
  rules:
    - path: /*
      methods: ["GET", "POST"]
      noAuth: true
```
Key changes from v1beta1:
- `host` is now `hosts` (a list)
- Requires a fully qualified domain name
- `accessStrategies` with `handler: noop` is replaced by `noAuth: true`
- The `/.*` path syntax is no longer valid; use `/*`
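For quick reference when migrating from an older tutorial, here are the two rule shapes side by side. The v1beta1 fragment is reconstructed from the changes listed above and will be rejected by current clusters:

```yaml
# v1beta1 (removed mid-2025; rejected by current Kyma)
rules:
  - path: /.*
    methods: ["GET", "POST"]
    accessStrategies:
      - handler: noop

# v2 equivalent
rules:
  - path: /*
    methods: ["GET", "POST"]
    noAuth: true
```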
Istio Sidecar Injection
APIRule v2 is Istio-based. Without sidecar injection enabled on your namespace, the APIRule will show an Error status. Enable it before deploying:
```sh
kubectl label namespace default istio-injection=enabled
```
Deploy
```sh
kubectl apply -f k8s/
```
Watch the pods — you’re looking for 2/2 in the READY column (your container plus the Istio sidecar):
```sh
kubectl get pods -w
```
Verify the APIRule is ready:
```sh
kubectl get apirule btp-python-app
```
Test
```sh
curl https://btp-python-app.c-abc123.kyma.ondemand.com/health
# {"status":"ok"}

curl https://btp-python-app.c-abc123.kyma.ondemand.com/api/data
# {"environment":"kyma-trial","message":"Hello from Kyma!"}
```
Troubleshooting
404 from istio-envoy — Check the VirtualService that the APIRule generates has the correct hostname:
```sh
kubectl get virtualservice
```
If it shows an incorrect domain, delete and recreate the APIRule:
```sh
kubectl delete apirule btp-python-app
kubectl apply -f k8s/apirule.yaml
```
Test the app inside the pod before blaming the networking:
```sh
kubectl exec -it $(kubectl get pod -l app=btp-python-app -o jsonpath='{.items[0].metadata.name}') \
  -c btp-python-app -- \
  python -c "import urllib.request; print(urllib.request.urlopen('http://localhost:8080/health').read())"
```
If this returns {"status":"ok"} then Flask is running fine and the problem is in the routing layer. You can also reach the Service from your own machine with `kubectl port-forward svc/btp-python-app 8080:80` and curl `http://localhost:8080/health`, which skips the APIRule and ingress gateway entirely.
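If you end up scripting that check (for example as a CI smoke test), the response handling can be factored out. A small sketch; `check_health` is a hypothetical helper, not part of any Kyma tooling:

```python
import json

def check_health(body: bytes) -> bool:
    """Return True when a /health response body reports {"status": "ok"}."""
    try:
        payload = json.loads(body)
    except ValueError:
        # Non-JSON body, e.g. an HTML 404 page from istio-envoy
        return False
    return isinstance(payload, dict) and payload.get("status") == "ok"

# Works on the body returned by the in-pod urllib call above
print(check_health(b'{"status": "ok"}'))   # True
print(check_health(b'<html>404</html>'))   # False
```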
Trial cluster expiry — Trial Kyma clusters expire after 14 days. When they go down, kubectl commands will throw DNS errors. Recreate the cluster from BTP Cockpit and re-download the kubeconfig. Keep your manifests in source control so you can redeploy quickly.
Next Steps
From here you can extend this with:
- XSUAA JWT validation on the APIRule for authentication
- A `ServiceInstance` and `ServiceBinding` for BTP services like Destination or Connectivity
- Reading bound service credentials from environment variables injected via Kubernetes secrets
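As a taste of that first item, a `ServiceInstance`/`ServiceBinding` pair for the SAP BTP service operator looks roughly like this. This is a sketch: the offering and plan names, and whether the operator is installed in your cluster, need checking against your own account:

```yaml
apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  name: my-destination
  namespace: default
spec:
  serviceOfferingName: destination   # verify against the marketplace in BTP Cockpit
  servicePlanName: lite
---
apiVersion: services.cloud.sap.com/v1
kind: ServiceBinding
metadata:
  name: my-destination-binding
  namespace: default
spec:
  serviceInstanceName: my-destination
  secretName: my-destination-secret  # credentials land in this Secret
```

The resulting Secret is what you would mount or inject as environment variables in the Deployment.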
Conclusion
Getting Python running on Kyma is more involved than it might first appear, but once you understand the moving parts — container registry, Istio sidecar injection, APIRule v2 syntax, and architecture compatibility — it becomes fairly repeatable. The trial environment adds some extra friction with expiring clusters and kubeconfigs, but the underlying platform is solid.
The key is understanding that Kyma is a proper Kubernetes-native environment rather than a traditional PaaS. Once that clicks, the tooling makes sense — Istio out of the box, a clean service binding model, and automatic TLS via Let’s Encrypt. There’s a learning curve, but it’s worth the investment.