Pre-Read: Standardizing AI Tool Integration with MCP: Architecture and Implementation
For an understanding of the MCP server and how it works, refer to the link above.
Overview:
This document describes the integration of a Model Context Protocol (MCP) server with SAP Joule Studio through a Joule Agent, enabling conversational, AI-driven tool invocation from within SAP Build Process Automation. The integration allows users to interact with MCP server tools directly through Joule’s conversational interface without writing custom client code.
Architecture:
Integration steps:
Step 1: MCP Server Deployment on Kyma
The MCP server is deployed on Kyma with:
Deployment Manifest (YAML/deployment.yaml)
- Containerized image with MCP server code
- Environment variables via Kubernetes secrets:
- WEATHER_API_KEY
- JENKINS_URL
- JENKINS_USER
- JENKINS_TOKEN
- Internal service exposing port 8080
API Exposure (YAML/APIRule.yaml)
- Public endpoint via APIRule
- HTTP methods enabled: GET, POST, OPTIONS
- No authentication required (can be secured if needed)
Result: MCP server is accessible via a public URL, e.g.: https://mcp-server.<cluster>.kyma.ondemand.com
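For reference, a minimal sketch of the deployment manifest showing how the credentials are injected from a Kubernetes secret (the secret name mcp-server-secrets is an assumption; the deployment name, namespace, and container name match the kubectl command used later in this document):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-server-deployment
  namespace: mcp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mcp-server
  template:
    metadata:
      labels:
        app: mcp-server
    spec:
      containers:
        - name: mcp-server
          image: <registry>/mcp-server:v1
          ports:
            - containerPort: 8080   # exposed by the internal service
          envFrom:
            - secretRef:
                name: mcp-server-secrets   # holds WEATHER_API_KEY, JENKINS_URL, JENKINS_USER, JENKINS_TOKEN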
Step 2: SAP BTP Destination Configuration
Create a destination in SAP BTP to connect Joule Studio to the MCP server.
Configuration
- Name: MCP-Server (or custom name)
- Type: HTTP
- URL: MCP server public endpoint from Kyma APIRule
- Authentication: None (or Basic/OAuth if required)
- Additional Property: sap-joule-studio-mcp-server = true (required to make the destination visible in Joule Studio)
Purpose
- Enables Joule Studio to discover MCP server dynamically
- Avoids hardcoding endpoints
Step 3: Joule Agent Setup in Joule Studio
Inside SAP Build Process Automation (SAP BPA):
Create / Open Joule Agent
- Initialize or open an existing project
Add MCP Server
- In agent configuration, add MCP server as tool source
- Select the destination created in Step 2 from the dropdown
- Tools from MCP server will be automatically discovered and listed
Configure Model Settings
- Base Model: e.g., GPT-4o
- Advanced Model: e.g., GPT-4
- LLM Providers: OpenAI, Claude, Vertex AI, Mixtral
Deploy
- Release the agent configuration
- Deploy to environment (Dev / Test / Prod)
Step 4: Access the Deployed Agent
Once deployed, the agent is accessible via:
- Joule conversational interface
- Direct URL to the deployed environment
- Embedded SAP applications or portals
Integration workflow:
End-to-End Workflow: Triggering a Jenkins Pipeline
Step 1: Initial Connection & Setup
- Client establishes an SSE connection to the MCP server at the /mcp endpoint (Kyma-deployed)
- Client retrieves available tools via list_tools() call
- Server responds with five tool schemas:
- get_weather
- sentiment_analysis
- get_jenkins_job_info
- trigger_jenkins_pipeline
- get_jenkins_build_status
- Client fetches initial prompts via get_prompt(), which include system instructions for the LLM
- SSE connection remains open for bidirectional communication throughout the session
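In the Joule integration this connection is handled by the agent runtime, but the same discovery step can be reproduced standalone with the MCP Python SDK. A minimal sketch, assuming the Kyma endpoint from Step 1 and the /mcp path described above:
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

SERVER_URL = "https://mcp-server.<cluster>.kyma.ondemand.com/mcp"

async def discover_tools():
    # Open the SSE transport and an MCP session on top of it
    async with sse_client(SERVER_URL) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(discover_tools())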
Step 2: User Query
- User enters: “Trigger ‘mcp-integrated-pipeline’ with version 2.1.0 for production environment”
- Client captures the query and adds it to message history as a user role message
Step 3: First LLM Call
- Client sends the following to GPT-4o:
- Complete message history (system prompt + user query)
- All five tool schemas with their input parameters and descriptions
- LLM analyzes the request and follows the system prompt guideline: “Always call get_jenkins_job_info FIRST before triggering pipelines”
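Conceptually, this first call is a standard chat-completions request with the MCP tool schemas attached as function tools. A sketch using the OpenAI Python SDK (in Joule this is performed by the agent runtime; only one of the five tool schemas is shown for brevity):
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "Always call get_jenkins_job_info FIRST before triggering pipelines."},
    {"role": "user", "content": "Trigger 'mcp-integrated-pipeline' with version 2.1.0 for production environment"},
]

# Tool schemas converted from the MCP list_tools() response (one of five shown)
tools = [{
    "type": "function",
    "function": {
        "name": "get_jenkins_job_info",
        "description": "Fetch metadata for a Jenkins job (parameters, buildability, URL).",
        "parameters": {
            "type": "object",
            "properties": {"job_name": {"type": "string"}},
            "required": ["job_name"],
        },
    },
}]

response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(response.choices[0].message.tool_calls)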
Step 4: First Tool Call — get_jenkins_job_info
- LLM responds with: get_jenkins_job_info(job_name="mcp-integrated-pipeline")
- Client extracts the tool call and forwards it to MCP server via SSE
- MCP server execution:
- Authenticates with Jenkins using environment credentials
- Sends HTTP GET request: {JENKINS_URL}/job/mcp-integrated-pipeline/api/json
- Retrieves metadata such as parameter definitions, buildability status, and job URL
- Server response:
{
  "parameters": [
    {"name": "VERSION", "type": "string"},
    {"name": "ENVIRONMENT", "type": "string"}
  ],
  "buildable": true,
  "url": "https://jenkins.example.com/job/mcp-integrated-pipeline/"
}
- Client appends result to message history with role: tool
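On the server side, this tool is essentially a thin wrapper around the Jenkins JSON API. A minimal sketch of what the implementation in server.py might look like (the FastMCP instance name, field selection, and error handling are assumptions):
import os
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mcp-server")  # defined once in server.py

@mcp.tool()
def get_jenkins_job_info(job_name: str) -> dict:
    """Fetch parameter definitions, buildability, and URL for a Jenkins job."""
    resp = requests.get(
        f"{os.environ['JENKINS_URL']}/job/{job_name}/api/json",
        auth=(os.environ["JENKINS_USER"], os.environ["JENKINS_TOKEN"]),
        timeout=30,
    )
    resp.raise_for_status()
    job = resp.json()
    # Flatten Jenkins parameter definitions into the shape shown in the response above
    params = [
        {"name": p["name"], "type": p.get("type", "string")}
        for prop in job.get("property", [])
        for p in prop.get("parameterDefinitions", [])
    ]
    return {"parameters": params, "buildable": job.get("buildable", False), "url": job.get("url")}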
Step 5: Second LLM Call (with tool result)
- Client sends updated message history back to GPT-4o:
- Previous messages (system + user + assistant tool_call + tool result)
- LLM analyzes the tool result and user’s original query
- LLM validates:
- VERSION = 2.1.0 provided
- ENVIRONMENT = production provided
- LLM decides to proceed with pipeline trigger
Step 6: Second Tool Call — trigger_jenkins_pipeline
- LLM responds with:
trigger_jenkins_pipeline(
  job_name="mcp-integrated-pipeline",
  parameters={"VERSION": "2.1.0", "ENVIRONMENT": "production"}
)
- Client extracts the tool call and sends to MCP server via SSE
- MCP server execution:
- Authenticates using JENKINS_USER and JENKINS_TOKEN
- Sends HTTP POST request to Jenkins build API
- Jenkins queues build
- Server response:
{
  "status": "success",
  "queue_id": 12345,
  "build_number": 87,
  "message": "Pipeline mcp-integrated-pipeline triggered successfully"
}
- Client appends result to message history
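The trigger tool follows the same pattern, posting to the Jenkins buildWithParameters endpoint. Again a sketch, not the exact implementation: Jenkins answers with a Location header pointing at the queue item, and resolving the final build number would require polling that queue item, which is omitted here.
import os
import requests

@mcp.tool()  # same FastMCP instance as in the previous sketch
def trigger_jenkins_pipeline(job_name: str, parameters: dict) -> dict:
    """Trigger a parameterized Jenkins pipeline and return queue information."""
    resp = requests.post(
        f"{os.environ['JENKINS_URL']}/job/{job_name}/buildWithParameters",
        params=parameters,
        auth=(os.environ["JENKINS_USER"], os.environ["JENKINS_TOKEN"]),
        timeout=30,
    )
    resp.raise_for_status()
    # Jenkins answers 201 with a Location header that points at the queue item
    queue_url = resp.headers.get("Location", "")
    queue_id = int(queue_url.rstrip("/").split("/")[-1]) if queue_url else None
    return {
        "status": "success",
        "queue_id": queue_id,
        "message": f"Pipeline {job_name} triggered successfully",
    }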
Step 7: Final LLM Call (Synthesis)
- Client sends complete message history to GPT-4o
- LLM receives all context: original query + two tool executions + their results
- LLM synthesizes a natural language response
Step 8: Final Response to User
- LLM generates:
Successfully triggered mcp-integrated-pipeline with version 2.1.0 for production environment.
Build #87 has been queued (Queue ID: 12345).
The pipeline is now running.
Would you like me to check the build status?
- Client displays the response and message history is preserved for follow-up queries.
Key Flow Characteristics:
- SSE Transport: Maintains persistent connection throughout, enabling real-time bidirectional communication between client and server
- Iterative Processing: Multiple LLM calls in sequence, each building on previous tool results
- Intelligent Tool Selection: LLM autonomously decides which tools to call and in what order based on system prompt guidance
- Context Preservation: Full message history maintained across tool calls ensures coherent conversation
- Automatic Reconnection: If the SSE connection drops during tool execution, the client automatically reconnects and retries
- Security: Jenkins credentials are never exposed to the LLM; they are stored only in the server environment/Kubernetes secrets
- Stateless Server: Each tool call is independent; client manages conversation state
- Tool Chaining: Model can call multiple tools sequentially (get_jenkins_job_info → trigger_jenkins_pipeline → get_jenkins_build_status) based on intermediate results
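Taken together, these characteristics reduce to a small agent loop on the client side. A condensed sketch, assuming the OpenAI SDK and the MCP ClientSession from the earlier snippets (Joule implements an equivalent loop internally):
import json

async def agent_loop(session, client, messages, tools, model="gpt-4o"):
    # Repeat LLM calls until the model stops requesting tools
    while True:
        response = client.chat.completions.create(model=model, messages=messages, tools=tools)
        msg = response.choices[0].message
        messages.append(msg)
        if not msg.tool_calls:
            return msg.content  # final natural-language answer for the user
        for call in msg.tool_calls:
            # Execute each requested tool on the MCP server over the SSE session
            result = await session.call_tool(call.function.name, json.loads(call.function.arguments))
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": str(result.content),
            })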
Extensibility: Adding a new tool to the MCP server:
New tools extend MCP capabilities without requiring any changes to Joule Agent or client code. Once deployed, tools are automatically discovered and available to all connected agents.
1. Write the tool function in server.py:
from typing import Any, Dict

@mcp.tool()
def my_new_tool(param1: str, param2: int) -> Dict[str, Any]:
    """
    Clear description for LLM explaining what this tool does.

    Args:
        param1 (str): Description of parameter 1
        param2 (int): Description of parameter 2

    Returns:
        dict: Status and result data
    """
    try:
        # Tool implementation
        result = perform_operation(param1, param2)
        return {"status": "success", "result": result}
    except Exception as e:
        return {"status": "error", "message": str(e)}
2. Add Configuration (if needed):
- Dependencies: Add to requirements.txt
- API Keys: Add to .env (local) and YAML/secret.yaml + deployment.yaml (Kyma)
3. Redeploy MCP Server to Kyma:
docker build -t mcp-server:v2 .
docker push <registry>/mcp-server:v2
kubectl set image deployment/mcp-server-deployment mcp-server=<registry>/mcp-server:v2 -n mcp
4. Automatic Discovery: Joule Agent will discover the new tool on next connection/restart
- No changes needed in Joule Studio configuration
- Tool immediately available for user queries
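As a quick smoke test after redeployment, the new tool can also be invoked directly over an MCP client session (a sketch using the session from the discovery snippet above; my_new_tool and its arguments are the hypothetical example from step 1):
result = await session.call_tool("my_new_tool", {"param1": "example", "param2": 42})
print(result.content)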