Beyond Joule Capabilities: A Knowledge Graph Driven Runtime for SAP Business AI

The scaling question

Today, building AI on SAP often means building capabilities one use case at a time. A new question introduces a new YAML definition, API binding, and deployment cycle. This model works well for many scenarios. But it raises a natural question: how does it scale as the variety and cross-system reach of user questions increases?

SAP’s platform invitation is explicit: Joule as “an AI agent orchestrator,” an agent builder for custom deployments, Business Data Cloud to “supercharge AI agents with the data that matters most, from any source.” The infrastructure is there. This post explores one possible model for what to build on top of it – a working prototype running on SAP BTP Cloud Foundry, using a real A2A endpoint and a live PostgreSQL knowledge graph to answer retrieval questions without per-use-case authoring.

This is a personal exploration, not a product announcement. All views are my own. One constraint upfront: this covers read-oriented scenarios only – retrieval, explanation, analytics. Transactional write operations are out of scope.

The core idea

Joule owns the conversation. A2A owns the routing. The knowledge graph owns proven access patterns. OData owns the live data.

Each layer has one job. When a new system is added or a business term changes, only the relevant layer changes. The rest stays untouched.

This separation of concerns is the architectural backbone and the reason the model handles question variability differently from a capability-per-use-case approach. The knowledge graph accumulates validated OData patterns over time; the A2A runtime draws on them at request time. Questions the KG already knows how to answer require no new authoring.

The example

A user says to Joule: “Show me my active purchase orders from supplier Bosch.” No capability was pre-authored for that question.

Here is what happens:

[Image]

Fig 1. The KG holds the pattern. The A2A layer matches the intent and composes the filter at runtime. Joule sees a card identical to any native capability.

The KG held the base pattern for active purchase orders. The slot extractor identified “Bosch” from the utterance and composed the OData filter at runtime.

The actual call:

[Image]
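
The composition step can be sketched in TypeScript. The service path, entity set, and field names below are assumptions for illustration, not the prototype’s actual OData model:

```typescript
// Illustrative sketch: compose the runtime OData call from a KG pattern plus
// extracted slots. All names here are assumptions, not the prototype's model.
interface KgPattern {
  servicePath: string;      // OData service root
  entitySet: string;
  baseFilter: string;       // validated base filter stored with the pattern
  selectFields: string[];
}

function composeODataUrl(p: KgPattern, slots: Record<string, string>): string {
  // Turn each extracted slot into an OData filter term, escaping quotes.
  const slotFilter = Object.entries(slots)
    .map(([field, value]) => `${field} eq '${value.replace(/'/g, "''")}'`)
    .join(" and ");
  const filter = slotFilter ? `${p.baseFilter} and ${slotFilter}` : p.baseFilter;
  const query = [
    `$filter=${encodeURIComponent(filter)}`,
    `$select=${p.selectFields.join(",")}`,
    "$top=10",
  ].join("&");
  return `${p.servicePath}/${p.entitySet}?${query}`;
}
```

The point is that only the slot values change per utterance; the validated base filter comes from the KG untouched.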

The response template stored alongside the pattern in the KG rendered those records as a Joule card. The seam between Joule and the agent is invisible to the end user.

This leads to an important distinction from how Joule capabilities are typically built today.

Why this is different from Joule capabilities

SAP’s Architecture Center documents two paths for extending Joule with custom agents: low-code via Joule Studio, and pro-code via the Bring Your Own Agent pattern – where you expose an A2A-compliant HTTP endpoint and register it as a Joule Scenario. This prototype follows the pro-code path. What the knowledge graph adds is a runtime resolution layer on top of it: instead of one endpoint per use case, one endpoint resolves across all patterns the KG knows.

TRADITIONAL JOULE CAPABILITY

  • One YAML file per use case, authored upfront
  • Hardcoded API, filter, and field selection
  • New question type = new capability version
  • Scale = more YAML

A2A + KNOWLEDGE GRAPH MODEL

  • Patterns live in the KG, not in YAML files
  • Slot extraction composes filters at runtime
  • New supplier, year, or document: same pattern handles all
  • Scale = more usage


When to use each approach

  • Traditional Joule Capabilities: Simple, well-defined use cases; single system queries; predictable user patterns
  • KG-Driven Runtime: Cross-system queries; high variation scenarios; exploratory business questions

The shift is from design-time authoring to runtime resolution. Governance changes accordingly: instead of “who owns this YAML?” the question becomes “who validated this OData pattern?” – answered by a domain lead approving a query-and-result pair, not a developer editing files.

Reference: SAP Architecture Center – Integrating AI Agents with Joule documents the Bring Your Own Agent pro-code path, including A2A v0.3.0 protocol requirements and Joule Scenario registration.

The architecture – KG as runtime brain

Full architecture – A2A + Knowledge Graph on SAP BTP

[Image]

Fig 2. End-to-end architecture: Joule routes via A2A to the self-hosted agent on BTP CF; the Knowledge Graph resolves intent and composes OData filters at runtime. Dashed boxes are planned / not yet GA.

 

[Image]

Fig 3. The A2A layer is thin routing. The KG is where intelligence accumulates. The factory and runtime are the same flywheel – capability builds deposit patterns; the A2A runtime draws on them.

SAP’s pro-code reference architecture for BTP agents also names a Knowledge Graph Engine – part of HANA Cloud – as the layer for encoding semantic business relationships. The PostgreSQL KG here is a developer-accessible analog of that concept: the same idea of accumulated, semantically enriched access patterns, running on a BTP-bound managed service rather than HANA Cloud.

Users do not think in system boundaries

“What is our total spend with supplier Bosch this year?” That question touches S/4HANA purchase orders, Ariba contracts, and Concur expense claims simultaneously. Users ask it as one question. Today it typically requires three separate lookups.

A2A offers a path toward answering it in one response – routing to specialized agents per system at runtime and composing the result. Each agent knows its own domain; none needs to know about the others. The same pattern applies across the SAP portfolio: procurement in S/4HANA, sourcing in Ariba, workforce in SuccessFactors, expense in Concur, external workforce in Fieldglass. The A2A routing layer is what makes cross-system answers possible without cross-system coupling.
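
The fan-out-and-compose idea can be sketched in a few lines. The agent shape and the naive summation below are illustrative assumptions; a real implementation would need currency normalization and per-system error handling:

```typescript
// Sketch of cross-system composition: one question, several specialized
// agents, one merged answer. Agent signature and naive summation are
// assumptions for illustration.
type SpendResult = { system: string; amount: number; currency: string };
type SystemAgent = (question: string) => Promise<SpendResult>;

// Pure merge step, separated so it can be tested without live agents.
// Note: assumes a single currency; a real version must normalize first.
function mergeSpend(breakdown: SpendResult[]): number {
  return breakdown.reduce((sum, r) => sum + r.amount, 0);
}

async function totalSpend(question: string, agents: SystemAgent[]) {
  // Each agent answers for its own system; none knows about the others.
  const breakdown = await Promise.all(agents.map((agent) => agent(question)));
  return { total: mergeSpend(breakdown), breakdown };
}
```

Because the merge logic never touches system internals, adding a fourth system means adding a fourth agent, not changing the composer.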

One practical challenge emerges quickly: how to build initial KG coverage.

BDC – a potential path to faster coverage

The model works without BDC – but slowly. Building coverage through validated OData executions alone is a gradual process, and until the KG has enough patterns, too many intents fall back to “not yet supported.”

BDC Data Products are designed to expose semantically enriched, consumption-ready models covering business definitions, domain taxonomy, and cross-entity relationships across SAP applications. If those models can be used to seed provisional KG entries, and live OData executions then promote them to validated, the cold-start problem becomes more manageable. Whether that seeding pipeline is straightforward to build depends on how BDC exposes its metadata in a given tenant, something worth exploring rather than assuming.
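
If such a pipeline were built, the mapping step might look roughly like this. The BDC metadata shape below is entirely assumed, since how a tenant exposes it is exactly the open question:

```typescript
// Hypothetical sketch of KG seeding: map a BDC-style data product description
// to a provisional KG entry. The DataProductMeta shape is an assumption, not
// a real BDC API.
interface DataProductMeta {
  entity: string;
  businessTerms: string[];   // semantic vocabulary from the data product
  service: string;           // consumption endpoint
}

interface ProvisionalPattern {
  intentKeywords: string;
  entitySet: string;
  servicePath: string;
  validated: false;          // seeded entries always start provisional
}

function seedFromDataProduct(m: DataProductMeta): ProvisionalPattern {
  return {
    intentKeywords: m.businessTerms.join(" "),
    entitySet: m.entity,
    servicePath: m.service,
    // Promotion to validated happens only after a live OData execution
    // is reviewed, per the governance model below.
    validated: false,
  };
}
```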

Raw OData tells you the shape of the data. BDC aims to tell you what the data means. That distinction could matter significantly when matching natural language to a query.

BDC also points toward a second channel – analytical questions that are difficult or impractical to answer through transactional OData alone, such as aggregated spend analysis or trend queries across periods. Those typically require pre-computed or aggregated data rather than a live API call. This is a natural conceptual extension of the same A2A routing layer, though not yet built in this prototype.

One important caveat: BDC’s semantic model could provide a useful starting point, but tenant-specific customizations, restricted entity sets, and version differences would still require live validation before any pattern is trusted at runtime. BDC reduces the cold-start problem; it does not eliminate the validation step.

The honest complexity

Three areas are harder in practice than the architecture suggests.

Intent matching

PostgreSQL full-text search on intent keywords is fast but brittle on synonyms: “open orders” and “not invoiced” can mean the same thing; “cancelled” means different things in different modules. Production-grade matching would benefit from hybrid search combining full-text with embedding-based vector similarity so that semantic proximity compensates for lexical gaps.
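
A hybrid ranking might blend the two signals along these lines. The SQL, the pgvector cosine-distance operator usage, and the 0.4/0.6 weighting are illustrative assumptions, not the prototype’s code:

```typescript
// Sketch of hybrid intent matching: full-text rank plus embedding similarity.
// Assumes a kg_patterns table with a tsvector column and a pgvector column;
// <=> is pgvector's cosine-distance operator.
const HYBRID_SQL = `
  SELECT p.id,
         0.4 * ts_rank(p.intent_tsv, plainto_tsquery($1))
       + 0.6 * (1 - (p.intent_embedding <=> $2)) AS score
  FROM kg_patterns p
  WHERE p.validated
  ORDER BY score DESC
  LIMIT 5`;

// The same blending as a pure function (testable without a database):
// higher full-text rank and lower cosine distance both raise the score.
function hybridScore(ftsRank: number, cosineDistance: number, alpha = 0.4): number {
  return alpha * ftsRank + (1 - alpha) * (1 - cosineDistance);
}
```

The weighting would need tuning against real utterances; the point is only that semantic proximity gets a vote when lexical overlap is zero.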

Slot extraction

Resolving “Bosch” from a free-text utterance to a SupplierID sounds straightforward. In practice, users type partial names, aliases, or misspellings. Production-grade slot resolution needs alias handling, fuzzy matching, and potentially a lookup against master data, making it functionally a mini-MDM problem.
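
A minimal sketch of that resolution chain – alias table first, then fuzzy matching against master data – with the 0.6 threshold and all names as assumptions:

```typescript
// Illustrative slot resolution: exact alias hit first, then edit-distance
// fuzzy match against supplier master data. Threshold and shapes are
// assumptions, not the prototype's implementation.
function editDistance(a: string, b: string): number {
  // Classic Levenshtein dynamic program.
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)));
  for (let i = 1; i <= a.length; i++)
    for (let j = 1; j <= b.length; j++)
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)); // substitution
  return dp[a.length][b.length];
}

function resolveSupplier(
  utteranceName: string,
  aliases: Record<string, string>,            // e.g. "bosch" -> a SupplierID
  masterData: { id: string; name: string }[],
): string | null {
  const needle = utteranceName.trim().toLowerCase();
  if (aliases[needle]) return aliases[needle];
  let best: { id: string; score: number } | null = null;
  for (const s of masterData) {
    const name = s.name.toLowerCase();
    const sim = 1 - editDistance(needle, name) / Math.max(needle.length, name.length);
    if (!best || sim > best.score) best = { id: s.id, score: sim };
  }
  // Reject weak matches rather than guess; 0.6 is an arbitrary placeholder.
  return best && best.score >= 0.6 ? best.id : null;
}
```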

Authorization

The prototype authenticates via XSUAA JWT but calls OData using a service token, meaning it does not enforce row-level authorization per user. The right production path is principal propagation through BTP Destination Service – running the OData call under the user’s own identity. This is on the roadmap but not yet implemented. The SAP Reference Architecture notes that the production Agent Gateway path uses IAS App2App tokens with named user context; the delta between that and a service token is the same gap this prototype still needs to close.

What works well

  • Deterministic: every answer traces to a validated OData path – no hallucinated field names
  • Fast: KG lookup + OData call + render is sub-second
  • Auditable: every response traces to a specific pattern, entity, and filter
  • Composable: each system’s agent is independently owned and evolvable

Real limitations

  • Cold start: BDC seeding may help; without some form of pre-seeding, early coverage is thin
  • Intent matching: full-text search alone misfires on synonyms and ambiguous phrasing
  • Slot resolution: aliases and fuzzy matching needed for production-grade accuracy
  • Authorization: row-level security requires principal propagation – not yet implemented
  • Schema drift: S/4HANA upgrades can silently break existing patterns

Governance model (in the prototype): Patterns are proposed automatically when a query succeeds. A domain lead reviews and promotes to validated=true before the pattern is trusted at runtime. Validation evidence – query, response, timestamp, user context – is logged against each pattern. Patterns not executed in N days are flagged for review, not deleted.
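
That lifecycle can be expressed as a small state machine. The states and field names below are modeled on the prose, and the 90-day value stands in for the “N days” in the text – all of it illustrative, not the prototype’s schema:

```typescript
// Sketch of the governance lifecycle described above; names are assumptions.
type PatternState = "proposed" | "validated" | "flagged";

interface PatternMeta {
  state: PatternState;
  lastExecuted: Date;
}

const STALE_DAYS = 90; // placeholder for the "N days" in the text

function nextState(p: PatternMeta, now: Date, approvedByDomainLead: boolean): PatternState {
  // A domain lead promotes proposed patterns; nothing is auto-promoted.
  if (p.state === "proposed") return approvedByDomainLead ? "validated" : "proposed";
  // Validated patterns that go unused are flagged for review, not deleted.
  const idleDays = (now.getTime() - p.lastExecuted.getTime()) / 86_400_000;
  if (p.state === "validated" && idleDays > STALE_DAYS) return "flagged";
  return p.state;
}
```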

The prototype

Running on SAP BTP Cloud Foundry: Next.js application, SAP AppRouter for XSUAA authentication, PostgreSQL knowledge graph, BTP Destination Service for OData connectivity.

 

Context: SAP’s official path for custom agents connecting to Joule is the Agent Gateway – documented in the SAP Architecture Center as exposing Joule Agents via A2A 0.3.0 with IAS App2App authentication. The Agent Gateway is not yet generally available. This prototype takes the developer path in the interim: a self-hosted A2A endpoint on BTP CF using XSUAA JWT. When the Agent Gateway reaches GA, the authentication model and registration process would change – the KG and OData runtime layer underneath would not.
[Image]

Fig 4. * Service token today; principal propagation (user identity through to OData) is on the roadmap. When Agent Gateway reaches GA, the entry point shifts from self-hosted AppRouter to SAP-managed endpoint – the KG and OData layers remain unchanged.

Dual protocol handler

The endpoint accepts both A2A JSON-RPC 2.0 and Joule’s native agent-request format. Both converge on the same resolver:
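
As a sketch of how both formats might normalize to one internal shape – the A2A `message/send` structure follows the protocol spec, while the Joule-native field names are assumptions:

```typescript
// Sketch of the dual-protocol entry point: an A2A JSON-RPC 2.0 message/send
// request and an assumed Joule-native request both normalize to one shape.
interface NormalizedRequest {
  utterance: string;
  contextId?: string;
}

function normalizeRequest(body: any): NormalizedRequest {
  if (body.jsonrpc === "2.0" && body.method === "message/send") {
    // A2A: the user text lives in the message's text parts.
    const msg = body.params?.message ?? {};
    const text = (msg.parts ?? [])
      .filter((part: any) => part.kind === "text")
      .map((part: any) => part.text)
      .join(" ");
    return { utterance: text, contextId: msg.contextId };
  }
  // Assumed Joule-native shape: { utterance, contextId }.
  return { utterance: body.utterance ?? "", contextId: body.contextId };
}
```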

[Image]

The resolver: KG lookup, slot extraction, OData, render
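
That pipeline might be wired along these lines, with each stage injected so it can be stubbed or swapped independently; all names are illustrative:

```typescript
// Sketch of the resolver pipeline: KG lookup -> slot extraction -> OData ->
// render. Stages are injected dependencies; every name is an assumption.
interface ResolverDeps {
  matchPattern: (utterance: string) => Promise<{ id: string; urlTemplate: string } | null>;
  extractSlots: (utterance: string) => Promise<Record<string, string>>;
  callOData: (url: string) => Promise<unknown[]>;
  render: (patternId: string, records: unknown[]) => string;
}

async function resolve(utterance: string, deps: ResolverDeps): Promise<string> {
  const pattern = await deps.matchPattern(utterance);
  // Cold-start fallback: no validated pattern means no answer, never a guess.
  if (!pattern) return "This question is not yet supported.";
  const slots = await deps.extractSlots(utterance);
  // Fill {slot} placeholders in the stored URL template.
  const url = Object.entries(slots).reduce(
    (u, [k, v]) => u.replace(`{${k}}`, encodeURIComponent(v)),
    pattern.urlTemplate,
  );
  const records = await deps.callOData(url);
  return deps.render(pattern.id, records);
}
```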

[Image]

The knowledge graph schema

[Image]
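
As an illustrative guess at what a pattern record in such a KG might carry, expressed as TypeScript types – the prototype’s actual PostgreSQL schema may differ:

```typescript
// Assumed shape of a KG pattern record; field names are illustrative only.
interface KgPatternRow {
  id: string;
  intentKeywords: string;        // feeds PostgreSQL full-text search
  entitySet: string;             // OData entity set the pattern queries
  filterTemplate: string;        // base filter with {slot} placeholders
  selectFields: string[];
  responseTemplate: string;      // renders records as a Joule card
  validated: boolean;            // set by a domain lead, never automatically
  lastExecutedAt: string | null; // drives the stale-pattern review flag
}

// Only domain-lead-approved patterns are trusted at runtime.
function isTrusted(p: KgPatternRow): boolean {
  return p.validated;
}
```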

Joule’s contextId field carries pagination state across conversation turns. When a user says “show more,” the skip offset and pattern ID are encoded in the context ID, so the next turn re-runs the same OData query from the correct offset without a new KG lookup.
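
One way to implement that round-trip – base64-encoded JSON is an assumption here; only the idea of carrying pattern ID and skip offset in the context comes from the post:

```typescript
// Sketch of pagination state round-tripped through contextId (Node runtime).
interface PageState {
  patternId: string;
  skip: number;
}

function encodeContextId(s: PageState): string {
  return Buffer.from(JSON.stringify(s)).toString("base64url");
}

function decodeContextId(contextId: string): PageState {
  return JSON.parse(Buffer.from(contextId, "base64url").toString("utf8"));
}

// "Show more": bump the offset without a new KG lookup.
function nextPage(contextId: string, pageSize = 10): PageState {
  const s = decodeContextId(contextId);
  return { ...s, skip: s.skip + pageSize };
}
```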

Closing thoughts

The pieces fit together reasonably well. The A2A protocol is documented and implemented in Joule. The OData catalog is rich and well-governed. BTP handles authentication and connectivity. The engineering gaps – vector search, slot resolution, principal propagation – are real but well-defined.

For teams building on Joule and A2A today, this suggests a possible shift in focus: from defining individual capabilities to curating reusable, validated access patterns. The two approaches are not mutually exclusive; capability builds are what populate the KG in the first place. But the longer-term value may compound differently when patterns are shared rather than siloed per use case.

The idea to sit with

This is not a replacement for Joule capabilities. It is a complementary layer – one that may make them more scalable over time.

The real question is not can this work?
It is: who owns the knowledge graph?

One possible direction for SAP AI is not simply more capabilities, but fewer, more adaptive ones backed by a shared knowledge layer that agents draw on at runtime. SAP is delivering the ingredients: Joule as orchestrator, A2A as the routing standard, BDC as the semantic foundation.

Who else is exploring this direction?


This post reflects a personal exploration and learning exercise building on SAP BTP Cloud Foundry. All views are my own and do not represent SAP’s positions, roadmap, or product direction. Code snippets are from a personal prototype and are shared for illustrative purposes only.

Co-Authored with J4C and Sonnet 🙂

 
