Active Knowledge Modelling Methodology for Agent-Native Knowledgebases


Mahmudur R Manna

Preprint - April 26, 2026

DOI: 10.5281/zenodo.19782389 - License: CC BY-NC-SA 4.0

Based on the book: The World of Active Things

Preprint Note

This is a public preprint of ongoing journal-targeted work. It presents the Active Knowledge Modelling Methodology as a self-contained research paper while making its claim boundaries, related-work position, methodology, evaluation framing, and limitations explicit. Companion specification, schema, and reproducibility materials may be released separately.

Abstract

Modern artificial intelligence (AI) agents increasingly use tools, retrieval, workflows, and long-context reasoning, yet their operational worlds are often exposed as fragments: rows, documents, messages, statuses, dashboards, and hidden process logic. Relational systems gained a modelling grammar through the entity-relationship diagram (ERD). Enterprise AI still lacks an equivalent practical standard for agent-operable knowledge. This paper proposes the Active Knowledge Modelling Methodology (AKMM) for modelling knowledge as a world of explicit, stateful, active things.

AKMM begins from the claim that an active thing becomes knowable through boundary and that its boundary becomes intelligible through lifecycle. A Knowledgebase is therefore treated as an authored operational knowledge world for agents. AKMM defines Knowledgebase, Active Thing Type, Active Thing Instance, Identity Index, Lifecycle Memory, and Canonical Events as core structures. Identity Index exposes the active skeleton of a thing before action begins; Lifecycle Memory records how the thing has lived rather than merely what changed; Canonical Events preserve shared occurrences and their per-thing consequences.

The paper advances two bounded claims. First, AKMM is universal methodologically: primitives of boundary, lifecycle, state, event, transition, relation, and purpose recur where knowledge concerns active things; this is not empirical exhaustion. Second, Agent-Native means that the Knowledgebase already exposes the identity, lived path, lawful movement, relation, impact, and monitoring surfaces an agent needs. The empirical program includes one full order-processing proof of concept and two lighter transfer PoCs. Current results support AKMM as a serious candidate foundation, not an industrially validated standard.

Keywords

knowledge engineering; agent-native knowledgebase; lifecycle modelling; active things; case modelling; enterprise AI; knowledge representation

Highlights

  • Active Knowledge Modelling Methodology composes knowledge around active things.
  • Identity Index exposes bounded active identity before action begins.
  • Lifecycle Memory records lived movement rather than generic history.
  • Canonical Events preserve shared occurrences and per-thing consequences.
  • Evidence is bounded methodology evidence, not industrial validation.

1. Introduction

The current wave of enterprise AI is strong in model capability and weak in world composition.

Agents can call tools, retrieve documents, follow workflows, and produce increasingly coherent outputs. Yet when these agents enter real organizational worlds, they often do so through weakly composed knowledge. A claim is spread across forms, comments, attachments, emails, and statuses. A contract is scattered across documents, approvals, amendments, and payment records. An order is distributed across tables, files, messages, and operational dashboards. The agent does not meet the business case as a thing. It meets fragments and must reconstruct the case at runtime.

That reconstruction is expensive. It increases search, retrieval, inference, checking, and recovery effort. It weakens next-step reasoning. It makes monitoring brittle. It makes handoff poor. It hides state machines in code, workflows, or scattered systems rather than exposing them as knowledge. Much of the current enterprise AI stack compensates for this weakness through better prompts, broader retrieval, larger context windows, and more orchestration. These help, but they do not remove the underlying modelling gap.

This paper argues that the market lacks a practical standard methodology for agent-operable knowledge modelling. Relational systems had ERD. Enterprise AI has no equivalent practical standard for modelling business cases and other operational realities as explicit, stateful, agent-operable things. Because explicit modelling is hard, the market often falls back to naive RAG, chunking, and hidden process logic.

AKMM is proposed as a response to that gap. It is a methodology for modelling knowledge as a world of explicit, stateful, active things so that agents can operate on that world directly. AKMM does not begin from tables, documents, edges, or chunks. It begins from the Active Thing and its lifecycle-bearing reality.

The concepts of Active Things, Boundary Analysis, and Lifecycle Analysis were introduced in earlier monograph work by the present author (Manna, 2026). This paper does not assume familiarity with that book. Instead, it restates the minimum concepts needed for the current argument and extends them into a practical methodology, AKMM, together with a bounded empirical program. A fuller normative specification is maintained separately as companion material rather than required reading.

This paper makes six contributions.

  1. It proposes AKMM as a missing methodology for agent-operable knowledge modelling.
  2. It defines explicit stateful active reality as the modelling basis of a Knowledgebase.
  3. It defines Identity Index, Lifecycle Memory, and Canonical Events as core AKMM structures.
  4. It defines the knowledge semantics exposed by those structures.
  5. It defines what it means for a Knowledgebase to be Agent-Native.
  6. It provides bounded empirical evidence through one full anchor domain and two lighter transfer domains.

2. The Missing Standard And Neighbouring Traditions

The practical history of data systems includes clear modelling traditions for some domains and weak ones for others. In relational practice, ERD became a durable, practical modelling language (Chen, 1976). It did not solve every problem, but it gave practitioners a common grammar for thinking about structure before implementation. That standard helped relational systems become legible, teachable, and reusable across teams.

Enterprise AI has no comparable standard for modelling knowledge in the form agents actually need. This is not because adjacent modelling traditions do not exist. Ontology and Resource Description Framework (RDF) traditions provide formal approaches to conceptual and assertion-level representation (Gruber, 1995; W3C, 2014), and Web Ontology Language (OWL) and knowledge-graph work extend that tradition toward richer graph-based knowledge representation and reasoning (W3C OWL Working Group, 2012; Hogan et al., 2021). Business process and case management standards such as Business Process Model and Notation (BPMN), Case Management Model and Notation (CMMN), and Decision Model and Notation (DMN) provide process, case, and decision formalisms (Object Management Group, 2011; Object Management Group, 2016; Object Management Group, 2019). Artifact-centric business process work is especially close to AKMM because it places business artifacts and their lifecycles at the center of operational modelling (Nigam and Caswell, 2003; Bhattacharya et al., 2007; Hull, 2008). Process mining and object-centric event-log research provide ways to analyze event traces and multi-object event participation (Augusto et al., 2017; Berti et al., 2024). The claim of this paper is narrower and different: none of these has yet become a practical standard for modelling agent-operable enterprise knowledge as bounded stateful active things in the way AKMM proposes.

The point is not that neighbouring traditions are weak or irrelevant. Many are highly effective for the problems they were built to solve. The deeper distinction is that all serious approaches pay a composition toll somewhere. The more useful question is where that effort is paid, what it primarily produces, and how much case reconstruction still remains at runtime. Table 1 gives a high-level qualitative comparison across the main approach families enterprises currently rely on.

This paper therefore does not position AKMM as a replacement for ontology, graph modelling, process modelling, case management, decision modelling, artifact-centric process models, event logs, object-centric process mining, or business intelligence. AKMM is closest in spirit to artifact-centric and object-centric traditions, but it gives a different methodological answer to the agent-facing knowledgebase problem. Artifact-centric BPM asks how business operations may be specified around evolving artifacts. Object-centric event-log work asks how events involving multiple objects can be represented and analyzed. AKMM asks how an operational world should be authored so that an agent can meet a bounded active thing through identity exposure, lived memory, lawful movement, shared event consequence, relation, and purpose before reconstructing it from fragments.

Table 1. Qualitative comparison of where composition effort is paid.

Approach family: RAG / vector retrieval practice
  Main authoring or ingestion effort: document cleaning, chunking, metadata, embedding, retrieval tuning
  Primarily produces: searchable fragment corpus
  Typical runtime case reconstruction: high
  Resulting modelling yield: good retrieval surface, weak explicit case model

Approach family: Graph / ontology modelling
  Main authoring or ingestion effort: schema design, extraction, entity resolution, relation modelling
  Primarily produces: entity-relation semantic network
  Typical runtime case reconstruction: moderate
  Resulting modelling yield: strong relation semantics, weaker lifecycle-centered case knowledge

Approach family: Workflow / case-management systems
  Main authoring or ingestion effort: process modelling, rule configuration, task routing, status design
  Primarily produces: process and control surface
  Typical runtime case reconstruction: moderate
  Resulting modelling yield: strong movement control, weaker full knowledgebase semantics

Approach family: Event-sourced architectures
  Main authoring or ingestion effort: event design, normalization, stream discipline, projection logic
  Primarily produces: replayable event history
  Typical runtime case reconstruction: moderate
  Resulting modelling yield: strong event truth, weaker direct case entry and lived-memory surfaces

Approach family: AKMM
  Main authoring or ingestion effort: explicit active-case composition: boundary, lifecycle, state, events, relations, purpose
  Primarily produces: agent-operable active knowledgebase
  Typical runtime case reconstruction: reduced
  Resulting modelling yield: active-knowledge grammar plus Identity Index, Lifecycle Memory, Canonical Events, and reusable projections

Read in this way, AKMM should not be judged by whether it avoids composition effort. It does not. The difference is that the same effort is meant to yield a proper knowledgebase rather than only a retrieval surface, a relation surface, a workflow surface, or an event log. That is why the paper argues for AKMM as a modelling standard rather than only another storage or orchestration choice.

Because this modelling gap remains open, the market often defaults to the easiest available substitute. Chunking is easier than modelling. Retrieval is easier than composition. Prompting over fragments is easier than making state and lawful movement explicit. This is one reason naive RAG and retrieval-heavy enterprise AI practices spread so easily: they are easier to begin with than explicit knowledge modelling (Bruckhaus, 2024; Yu et al., 2024).

The result is not just a technical inconvenience. It shapes the whole quality of enterprise AI. If the world remains weakly modelled, agents must continually infer what should already have been composed. The more they must reconstruct, the more expensive and brittle they become.

The claim of this paper is therefore not only that AKMM is useful. It is that a missing standard matters materially for the quality, cost, and governability of enterprise AI.

3. Foundational Claim

The foundational claim of AKMM is that operational reality is better modelled as a world of Active Things than as a flat collection of attributes, documents, edges, statuses, or retrieved fragments.

For the purposes of this paper, an Active Thing is a thing that becomes knowable through boundary and whose boundary becomes intelligible through lifecycle. Operationally, it is a bounded thing whose identity, states, events, relations, and end conditions matter for reasoning or action. Boundary Analysis is the analytic task of determining what belongs within the thing as that thing. Lifecycle Analysis is the analytic task of understanding how the thing begins, changes, persists, relates, and ends within that boundary. These concepts are restated here in compact form so that the paper remains self-contained, even though their broader philosophical development appears in prior monograph work (Manna, 2026).

An Active Thing is not merely a category label or a record. It is something that:

  • begins
  • moves
  • changes
  • relates
  • persists
  • ends

Its boundary becomes knowable through lifecycle. Its operational intelligibility depends on the explicit recognition of state, transition, event, relation, and purpose. This is why AKMM treats state-machine logic as part of knowledge. The question is not whether software can implement a state machine somewhere. The question is whether the thing itself is made explicit enough that an agent can know what state it is in, what transitions are lawful, what events matter, and what completion or failure mean.
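The demand that lawful movement be explicit knowledge rather than hidden implementation can be sketched minimally. The order states and transitions below are illustrative assumptions, not normative AKMM content; the point is only that an agent can interrogate lawful movement as data before acting.

```python
# Minimal sketch: lawful transitions exposed as inspectable knowledge,
# rather than logic buried inside application code. States are assumptions.
LAWFUL_TRANSITIONS = {
    "created":   {"confirmed", "cancelled"},
    "confirmed": {"shipped", "cancelled"},
    "shipped":   {"delivered"},
    "delivered": set(),   # terminal state
    "cancelled": set(),   # terminal state
}

def lawful_next_states(state: str) -> set[str]:
    """What movement is lawful from here? An agent can ask before acting."""
    return LAWFUL_TRANSITIONS[state]

def is_lawful(current: str, target: str) -> bool:
    """Is a proposed transition lawful, blocked, or simply undefined?"""
    return target in LAWFUL_TRANSITIONS.get(current, set())
```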

3.1 What About Things That Appear Non-Active

The question is natural: if AKMM is about Active Things, then what about things that appear non-active? AKMM takes a stronger position. Knowledge itself is Active, and there are no non-active things in knowledge. Whatever matters as a thing in knowledge already has boundary, lifecycle, relation, and consequence, even when current systems expose it only as a frozen snapshot.

A constant, for example, does not stand outside activity. It enters declaration, adoption, use, revision, deprecation, and retirement. An attribute or property does not stand outside activity. It enters definition, assignment, validation, update, override, and removal. A rule does not stand outside activity. It enters authorship, approval, application, exception, revision, and withdrawal. Data does not stand outside activity. It enters creation, linkage, correction, enrichment, supersession, and archive.

What appears non-active is therefore not outside AKMM. It is knowledge whose boundary and lifecycle have been hidden by representation. AKMM does not add activity from outside. It makes the Active Thing more faithfully knowable.

From this view, the problem in many current enterprise systems is not the absence of data. It is the absence of faithful composition. Data exists. Events exist. Statuses exist. But the business case is not made explicit as a first-class stateful thing. The world is therefore available only indirectly.

AKMM responds to this by making stateful active reality first-class knowledge. In AKMM, the Knowledgebase is not a passive repository. It is an authored operational knowledge world composed of Active Things and their lived movement.

4. AKMM Core Methodology

AKMM is not only a vocabulary for naming structures. It is a modelling procedure that begins by deciding what bounded world is being built, what active things matter inside that world, and how those things will be made operable for later reasoning and action. For that reason, the root object of an AKMM modelling exercise is the Knowledgebase itself. The Knowledgebase is the bounded knowledge world being built for a concern, and AKMM also models that bounded world as a root Active Thing so that its boundary, purpose, and lifecycle can be represented explicitly. It is therefore not merely a schema label or a folder of sources.

Within that Knowledgebase, the modeller identifies Active Thing types and Active Thing instances. A type defines the pattern of life for a class of things, while an instance carries the actual path lived by one occurrence. AKMM requires each type to be made intelligible through seven features: Start, End, States, Transitions, Events, Relations, and Purpose. These features are not decorative metadata. They are the minimum analytic structure through which the thing becomes knowable as an active thing rather than a record or document cluster. Start and End determine how the thing enters and leaves scope. States and Transitions make movement explicit. Events identify what matters to that movement. Relations make clear how the thing stands with other active things. Purpose gives direction to the whole lifecycle and prevents the model from collapsing into an empty change log.

Once the type structure is clear, AKMM requires two complementary representations of each thing. The first is Identity Index. Identity Index is the bounded entry structure through which the thing becomes substantially knowable at first contact. It is not a loose summary or a convenience view. It is the compact active exposure of the thing's skeleton: how it enters scope, how it closes, what state matters now, which transitions are lawful or significant, which events are major, which relations are currently active, and what purpose gives direction to its movement. Identity Index changes the entry problem for both humans and agents. Instead of beginning with a document pile, a status code, or a long operational trace, the reader begins with the thing itself as a bounded active reality.
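As a minimal sketch, an Identity Index can be pictured as a compact record of the thing's active skeleton. The field names and sample values below are illustrative assumptions, not a normative schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of an Identity Index record: the bounded entry
# structure through which a thing becomes knowable at first contact.
@dataclass
class IdentityIndex:
    thing_id: str
    thing_type: str
    purpose: str                      # direction of the whole lifecycle
    start: str                        # how the thing entered scope
    end_conditions: list[str]         # how it may lawfully close
    current_state: str
    lawful_transitions: list[str]     # movement available now
    major_events: list[str]
    active_relations: dict[str, str]  # role -> related thing id

idx = IdentityIndex(
    thing_id="order-1042",
    thing_type="Order",
    purpose="move toward lawful fulfillment or closure",
    start="order_created",
    end_conditions=["delivered", "cancelled"],
    current_state="confirmed",
    lawful_transitions=["shipped", "cancelled"],
    major_events=["order_created", "payment_received"],
    active_relations={"customer": "cust-77", "payment": "pay-310"},
)
```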

The second representation is Lifecycle Memory. If Identity Index gives the active skeleton, Lifecycle Memory gives the lived path. It is not generic history, undifferentiated audit residue, or a list of field mutations. It remembers meaningful movement in terms faithful to the thing: how it entered, what state it occupied, what event mattered, what transition occurred, what relation became active, and what direction was advanced, blocked, or completed. This distinction is central to AKMM. The methodology is not satisfied by a system that can tell us that something changed. It must be able to tell us how the thing lived through that change.

This thing-centered memory still does not remove the need for a shared event layer. One real-world event may participate in several lifecycles at once, and AKMM therefore requires Canonical Events whenever meaningful shared movement must retain one identity across many things. AKMM is not merely an event-store ontology, because the primary unit remains the Active Thing, but shared-event composition is still necessary. A canonical event preserves one occurrence while recording many per-thing consequences. That makes cross-thing impact tracing, relation activation, and participant-role reasoning possible without forcing the model to duplicate the same event as many separate authored truths.
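A Canonical Event can be sketched as one occurrence with one identity and several per-thing consequences. The warehouse incident, ids, and effects below are illustrative assumptions.

```python
# Sketch: one shared occurrence recorded once, with distinct per-thing
# consequences, so cross-thing impact tracing needs no duplicated truths.
canonical_event = {
    "event_id": "evt-2001",
    "kind": "warehouse_incident",
    "occurred_at": "2026-04-02T14:30",
    "consequences": [
        {"thing_id": "order-1042", "role": "pending_shipment",
         "effect": "transition blocked: confirmed -> shipped"},
        {"thing_id": "order-1055", "role": "already_packed",
         "effect": "relation activated: reinspection required"},
    ],
}

def affected_things(event: dict) -> list[str]:
    """Cross-thing impact tracing: which things did this occurrence touch?"""
    return [c["thing_id"] for c in event["consequences"]]
```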

From these core structures, AKMM derives projections such as Current State and Relation Surface. These projections are useful because they make entry, traversal, and operational retrieval cheaper, but they are not the first truth of the model. The methodological rule is to record once at the level of meaningful movement and project many times for use. In this way, AKMM separates canonical active knowledge from the views needed for search, monitoring, dashboards, and agent access.
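The rule of recording once and projecting many times can be sketched by deriving Current State and a Relation Surface from recorded movement. Entry shapes are illustrative assumptions.

```python
# Sketch: projections derived from Lifecycle Memory, not second authored
# truths. Entries here are simplified to (event, to_state) pairs.
memory = [
    ("order_created", "created"),
    ("payment_received", "confirmed"),
    ("shipment_dispatched", "shipped"),
]

def project_current_state(entries):
    """Record once at the level of movement; project the latest state."""
    return entries[-1][1] if entries else None

def project_relation_surface(relation_events):
    """Keep only relations that are currently active; drop deactivated ones."""
    surface = {}
    for relation, active in relation_events:
        if active:
            surface[relation] = True
        else:
            surface.pop(relation, None)
    return surface
```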

Taken together, these structures are not merely modelling containers. They expose different ways of knowing the same active thing. Identity Index gives bounded first-contact knowledge. Lifecycle Memory gives lived temporal knowledge. Canonical Events give shared-occurrence and impact knowledge. Derived projections make some of that knowledge cheaper to enter operationally. The next section therefore addresses a question that the paper must answer explicitly: in what sense is AKMM actually a methodology of knowledge modelling rather than only a way to structure cases?

For reviewability, the current methodological contract can be summarized as follows. A minimally conforming AKMM model should:

  • define a bounded Knowledgebase with purpose;
  • identify Active Thing types and instances;
  • define each type through Start, End, States, Transitions, Events, Relations, and Purpose;
  • expose each instance through an Identity Index when it must be encountered operationally;
  • record meaningful instance movement through Lifecycle Memory;
  • preserve shared occurrences through Canonical Events with participant roles and per-thing consequences;
  • derive Current State and Relation Surface as projections rather than separate truths;
  • retain source references sufficient for grounded answers and calibrated uncertainty.

These requirements are intentionally stated as a methodological contract, not yet as a final formal standard or complete notation.

The same model can also be declared textually. The excerpt below is an authored schema fragment rather than runtime data.

schema:
  id: "akmm.order_processing.v1"
  name: "Order Processing Domain Schema"
  schema_kind: "domain_schema"

knowledgebase:
  id: "kb.order_processing"
  name: "Order Processing Knowledgebase"
  purpose: >
    Make order-processing cases directly operable for agents and humans.

active_thing_types:
  - id: "Order"
    purpose: >
      Move a customer order toward lawful fulfillment,
      delivery, or closure.
    start:
      event_types: ["order_created"]
    end:
      states: ["delivered", "cancelled"]
      event_types: ["shipment_delivered", "order_cancelled"]

Listing 1. Compact authored schema excerpt for the worked Order Processing model.
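A minimal sketch of how the authored excerpt could be used at runtime, assuming Listing 1 has been parsed into Python dictionaries (for example by a YAML loader); ids and event names follow the excerpt.

```python
# Mirror of the "Order" type from Listing 1 as parsed data (assumption:
# a YAML loader has already produced this dictionary shape).
order_type = {
    "id": "Order",
    "start": {"event_types": ["order_created"]},
    "end": {"states": ["delivered", "cancelled"],
            "event_types": ["shipment_delivered", "order_cancelled"]},
}

def opens_instance(event_type: str, thing_type: dict) -> bool:
    """Does this event lawfully bring a new instance into scope?"""
    return event_type in thing_type["start"]["event_types"]

def closes_instance(state: str, thing_type: dict) -> bool:
    """Is this state a lawful end for the type?"""
    return state in thing_type["end"]["states"]
```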

The same provisional notation can also be transferred into lighter domains without reproducing the full density of the Order Processing example. Figure 3 shows two compact transfer-model previews authored through the same boundary, Active Thing card, state-pill, relation-arrow, and Canonical Event inset grammar.

5. Knowledge Semantics of AKMM

AKMM is knowledge modelling not only because it defines a modelling grammar, but because it changes what a system can know about an active thing. A weakly composed enterprise system may already hold rows, documents, comments, messages, status values, and timestamps. Those artifacts still carry information. But by themselves they do not yet make the thing knowable in a serious sense. They describe fragments, residues, and traces. AKMM attempts to compose those fragments into bounded active knowledge: knowledge of what the thing is, how it has lived, what movement is lawful, what event changed it, what other things it stands with, what shared occurrence affected it, and what direction its movement carries toward.

This is why the distinction between residue and remembered life matters so much. Ordinary history often records that something changed. It may show a field mutation, a note, a message, or an update timestamp. AKMM asks for something stronger: how the thing itself lived through that change. In that sense, the Knowledgebase is not only storing descriptions of activity. It is composing active things into more faithful forms of knowledge.

The core knowledge families exposed by AKMM are summarized below.

Table 2. Knowledge Families Exposed by AKMM.

Knowledge family | Primary AKMM structure | Example question
Identity knowledge | Identity Index | What is this thing?
Temporal knowledge | Lifecycle Memory | How did it get here?
Transition and trigger knowledge | States, Transitions, Lifecycle Memory | Which event changed its path?
Impact knowledge | Canonical Events | Which things were affected by one event?
Relational knowledge | Relations, Relation Surface | What is it currently related to?
Purpose knowledge | Purpose, lifecycle direction | What is this movement carrying toward?
Comparative and analogical knowledge | Shared seven-feature grammar | How does this case differ from another, or resemble a case in another domain?
Monitoring and anomaly knowledge | States, transitions, expected events, projections | What is stuck, missing, unlawful, or overdue?

Identity knowledge enters through Identity Index. The thing is no longer encountered as a loose fragment or as a document pile. It is encountered as a bounded active skeleton with start, end, current state, major events, current relations, and direction of movement. Temporal knowledge enters through Lifecycle Memory. This is the lived-path layer of the methodology: not a generic audit trail, but memory of how the thing entered, moved, stalled, changed, and reached its present condition. Transition and trigger knowledge then become possible because state and movement are explicit. AKMM can answer not only that the thing changed, but which event mattered, what transition occurred, and whether that movement was lawful, blocked, or terminal. In operational settings this becomes a form of causal-temporal insight: not a grand theory of universal causation, but a grounded account of what event triggered what movement in the life of the thing.

Impact knowledge appears when one occurrence matters across many things. Canonical Events preserve that one occurrence while recording distinct per-thing consequences. This is why Canonical Events are not merely a storage convenience. They make cross-thing impact tracing possible without multiplying one authored occurrence into many separate truths. Relational knowledge and purpose knowledge deepen that picture further. Relations show which other active things matter to the case now, while purpose makes the movement intelligible as more than a bare sequence. Without purpose, movement risks becoming only chronology. With purpose, the model can distinguish movement toward fulfillment, restoration, settlement, diagnosis, closure, escalation, or collapse.

Comparative and analogical knowledge are especially important for the Universal claim. Because very different domains are modelled through the same active-knowledge grammar, two cases can be compared without being flattened into sameness. Two orders can be compared by state, path, blockage, and closure pattern. A claim can be compared to a contract or an incident not because they are identical, but because they become knowable through the same bounded active skeleton. Monitoring and anomaly knowledge follow from the same foundations. Once expected states, transitions, events, and relation changes are explicit, the model can surface what is missing, stuck, unlawful, delayed, or escalated.
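Monitoring knowledge of this kind can be sketched as a projection query over explicit states. The states, thresholds, and sample data below are illustrative assumptions.

```python
from datetime import datetime, timedelta

# Sketch: once expected movement is explicit, "what is stuck?" becomes a
# simple projection query rather than a reconstruction task.
STALE_AFTER = {"created": timedelta(days=1), "confirmed": timedelta(days=3)}

def stuck_things(current_states, now):
    """Ids of things that have sat in a state beyond its threshold."""
    out = []
    for thing_id, (state, entered_at) in current_states.items():
        limit = STALE_AFTER.get(state)
        if limit is not None and now - entered_at > limit:
            out.append(thing_id)
    return out

now = datetime(2026, 4, 10)
states = {
    "order-1042": ("confirmed", datetime(2026, 4, 2)),     # 8 days: stuck
    "order-1055": ("created", datetime(2026, 4, 9, 12)),   # 12 hours: fine
}
```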

This is the point at which AKMM becomes more than a structural methodology. Structure modelling tells us how a representation is organized. Process modelling tells us how movement may occur. Event logs tell us that occurrences happened. AKMM is knowledge modelling because it composes an active thing into bounded identity, lived time, lawful movement, shared impact, relation, and fulfillment direction. In this form, the thing becomes more knowable before retrieval, reasoning, and action begin.

6. What Agent-Native Means

Agent-Native is the central practical claim of AKMM, but after the previous section it can be stated more precisely. In this paper, Agent-Native does not mean merely that an agent can access data, call tools, or retrieve documents. It means that the Knowledgebase already exposes the knowledge surfaces the agent needs: bounded identity, lived path, lawful movement, event-triggered change, relation, impact, and monitoring-relevant state. The agent therefore does not have to reconstruct those forms of knowledge from fragments each time it touches a case.

This is why explicit state-machine logic matters so much. AKMM does not reduce reality to a simplistic finite-state chart, but it does insist that the thing carry enough explicit stateful structure that action can be grounded in states, transitions, events, relations, and closure conditions rather than inferred from fragments. The agent should not have to reconstruct identity knowledge from documents, temporal knowledge from timestamps, or impact knowledge from scattered residues. It should enter a world in which the case is already composed.

In practice, AKMM makes that possible through the combination of Identity Index, Lifecycle Memory, and Canonical Events. Identity Index gives bounded entry, Lifecycle Memory gives lived-path truth, and Canonical Events preserve shared occurrence and cross-thing impact. Together they allow the Knowledgebase to function as an operational knowledge world for agents rather than a passive repository that agents must continually reinterpret.
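The difference between reconstruction and composed entry can be sketched as follows; the surface names and values are illustrative assumptions, not a normative interface.

```python
# Sketch: with composed surfaces, an agent's grounding step is a lookup
# over authored knowledge, not a reconstruction over fragments.
knowledgebase = {
    "order-1042": {
        "identity_index": {
            "current_state": "confirmed",
            "lawful_transitions": ["shipped", "cancelled"],
            "purpose": "move toward lawful fulfillment",
        },
        "lifecycle_memory": [("order_created", "created"),
                             ("payment_received", "confirmed")],
    }
}

def ground_next_actions(kb, thing_id):
    """Everything needed to propose a lawful next step, from one entry."""
    idx = kb[thing_id]["identity_index"]
    return {"state": idx["current_state"],
            "options": idx["lawful_transitions"]}
```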

7. Why Universal Means Methodological Recurrence

The universal claim of AKMM is intentional, but it must be read correctly. It is not a claim that every possible domain has already been empirically exhausted. It is a methodological claim: wherever knowledge concerns an active thing, the same deep modelling primitives recur. Boundary, identity, lifecycle, state, transition, event, relation, and purpose are not accidental features of one enterprise slice. They are the recurring grammar through which active reality becomes knowable. In that sense, AKMM proposes a general grammar for active knowledge in the way ERD provided a general grammar for relational data modelling. The universality of AKMM therefore comes from the recurrence of active reality, not from the exhaustion of empirical cases.

This distinction matters for review. The current paper uses Universal in three different senses that should not be collapsed. First, there is a foundational universality claim: active things become knowable through boundary and lifecycle. Second, there is a methodological universality claim: the same seven-feature grammar and modelling procedure can be attempted wherever the knowledge target is an active thing. Third, there is empirical universality: proof that AKMM has been validated across all relevant domains. This paper claims the first two and does not claim the third.

The recurring primitives are the seven features already described: Start, End, States, Transitions, Events, Relations, and Purpose. The recurring procedure matters as much as the primitives: define the bounded Knowledgebase, identify the active things in scope, author their seven features, expose bounded entry through Identity Index, remember lived path through Lifecycle Memory, preserve shared occurrence through Canonical Events, and derive operational projections. This is what makes cross-domain comparison and analogy possible without flattening domains into sameness. The empirical program in this paper is therefore not the source of universality itself. It is an initial demonstration that the universal grammar can be instantiated across materially different domains.

The paper supports this claim at three levels. First, the modelling notation itself transfers. Figure 3 shows the same provisional notation applied not only to Order Processing but also to Manufacturing Maintenance and Insurance Claim Adjudication. Second, Figure 4 and Table 3 show that the same active-knowledge grammar can map five materially different cases across manufacturing, farming, orbital operations, clinical treatment, and insurance adjudication. Third, bounded executed transfer shows that the same AKMM artifact structure can be realized beyond the anchor domain. The point of the broader mapping pack is not to claim five full empirical programs. It is to show that AKMM is not trapped inside one order-to-cash slice or one kind of office workflow.

Table 3. Compact five-case universality matrix through the same AKMM grammar.

Case: Manufacturing
  Start: anomaly or inspection finding
  End: closed or replacement decision
  State arc: detected -> diagnosed -> waiting_parts / repair -> validated
  Event field: alert, diagnosis, part receipt, repair, validation
  Relation field: asset, technician, spare part
  Purpose: restore safe operation

Case: Farming
  Start: planting or season admission
  End: harvest, crop loss, or abandonment
  State arc: prepared -> planted -> vegetative -> stressed / recovering -> harvested / failed
  Event field: planting, pest detection, irrigation outage, treatment, harvest
  Relation field: field, crop variety, irrigation, weather
  Purpose: move toward healthy growth and yield

Case: Orbital
  Start: anomaly detection or safe-mode entry
  End: recovered, degraded continuation, or mission loss
  State arc: nominal -> anomalous -> safe_mode -> diagnosis -> recovery -> recovered / failed
  Event field: telemetry anomaly, uplink, reset, communications restored
  Relation field: spacecraft, subsystem, ground station, mission team
  Purpose: preserve mission continuity

Case: Clinical
  Start: diagnosis or admission
  End: resolved, chronic continuation, or death
  State arc: diagnosed -> treatment -> stabilizing / remission / relapse
  Event field: diagnosis, therapy start, adverse reaction, discharge, relapse
  Relation field: patient, clinician, regimen, lab result
  Purpose: move toward recovery or controlled continuation

Case: Insurance
  Start: claim submission
  End: paid, denied final, withdrawn, or litigated
  State arc: filed -> review -> evidence / approved / denied -> paid / appealed
  Event field: submission, evidence request, approval, denial, payment, appeal
  Relation field: claimant, policy, adjuster, evidence bundle, payment
  Purpose: reach a lawful settlement outcome

In addition to the full Order Processing anchor, the same AKMM artifact structure and deterministic query path have been executed in Manufacturing Equipment Maintenance and Insurance Claim Adjudication. Both transfer PoCs ingest the same four source classes (SQL DML, CSV, JSON, PDF), build Identity Index, Lifecycle Memory, Canonical Events, Current State, and Relation Surface, and then answer domain-native benchmark questions through the same query logic. In both transfer domains AKMM achieved a rubric pass rate of 1.0, while the no_lifecycle_memory ablation dropped to 0.3333. This does not prove all domains. It does show that AKMM moves beyond a single-domain conceptual claim into bounded executed transfer.

Universality therefore belongs first to the methodology and secondarily to the evidence assembled here. The paper argues that the same active-knowledge grammar can travel far beyond one enterprise slice, and it supports that argument through one deep anchor domain, two executed transfers, and one broader mapping pack.

8. Why Enterprise First

If AKMM is larger than enterprise, it is reasonable to ask why this paper begins with enterprise business cases. The answer is that enterprise cases are the right first proving ground because they make the current market failure both visible and measurable. They are strongly stateful, operationally sensitive, costly to get wrong, rich in fragmented records, and full of hidden state machines. They also demand exactly the knowledge families AKMM is meant to expose: identity, path, trigger, impact, relation, lawfulness, anomaly, and closure. They are therefore exactly the setting in which naive RAG and retrieval-heavy agent design become most visibly weak.

Orders, claims, contracts, incidents, approvals, and similar cases force questions that current systems often answer poorly: What is the real state of this case? How did it get here? What action is lawful now? What is blocked? Which event changed the path? What should be monitored continuously? These are precisely the kinds of questions AKMM is designed to make answerable through explicit active structure rather than runtime reconstruction.

Enterprise business cases are therefore not the limit of the methodology. They are the setting in which the need for it is currently most urgent and most immediately testable, and success there makes the larger universal claim more credible.

9. Evaluation Design And Results

The empirical evidence in this study has two executed layers. The first is a full anchor domain in Order Processing. The second is two lighter transfer PoCs in Manufacturing Equipment Maintenance and Insurance Claim Adjudication. The five-case universality pack discussed in the previous section is not treated as a third benchmark layer; it is mapping evidence for grammar transfer rather than a separate executed evaluation program.

The point of this empirical program is not industrial completeness. It is to test whether AKMM can compose usable active case worlds from the kinds of fragments real operational systems already contain, and whether the same AKMM artifact structure can transfer beyond one enterprise slice.

9.1 Evaluation design

The full anchor prototype ingests four source types: Structured Query Language data manipulation language (SQL DML), comma-separated values (CSV), JavaScript Object Notation (JSON), and Portable Document Format (PDF). It uses them to build an AKMM knowledgebase containing Active Thing types, Identity Indexes, Lifecycle Memory, Canonical Events, Current State projections, and Relation Surface projections.

A deterministic agent, with no large language model (LLM) in the loop, was then evaluated over that knowledgebase against three alternative access patterns: raw fragments, workflow/status-only access, and a strong denormalized BI/report baseline. This deterministic choice is methodological rather than incidental. It isolates the knowledge model from large-model variability and tests whether the AKMM knowledgebase itself exposes enough structure for bounded reasoning. The important point is not merely that AKMM answers questions, but that it answers them from explicit case structure. Table 4 summarizes the anchor evaluation design in compact form.

The compared systems are access-pattern baselines rather than definitive implementations of every neighbouring paradigm. Raw fragments represent reconstruction from source mass. Workflow/status access represents a common operational surface where current state and some event residue exist but the full active thing is not composed. BI/report access represents a strong denormalized view that supports analytics and some case inspection but does not make lived movement, lawful transition, and shared-event consequence first-class in the AKMM sense. This framing is important: the evaluation tests whether the AKMM artifact contract carries useful knowledge on the bounded slice; it does not prove that all possible ontology, CMMN, artifact-centric, OCEL, or BI systems would fail if engineered with equivalent case semantics.
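To make the deterministic style of reasoning concrete, here is a hypothetical sketch of a lawful-next-step query. The state names and transition table are illustrative, not the PoC's actual Order lifecycle: the point is that "what can happen next?" reduces to a lookup over authored structure rather than runtime inference.

```python
# Illustrative lawful-movement table for a hypothetical Order type.
TRANSITIONS = {
    "created":   ["confirmed", "cancelled"],
    "confirmed": ["packed", "on_hold", "cancelled"],
    "packed":    ["shipped"],
    "shipped":   ["delivered"],
    "delivered": [],  # End state: no lawful successors
}

def lawful_next_steps(current_state: str) -> list[str]:
    """Answer 'what can happen next?' from explicit structure, not inference."""
    return TRANSITIONS.get(current_state, [])

def is_lawful(current_state: str, target_state: str) -> bool:
    """Answer 'can action X happen now?' deterministically."""
    return target_state in lawful_next_steps(current_state)
```

Because the table is authored rather than reconstructed, the same lookup also supports monitoring: a case sitting in a state whose lawful successors have not fired within an expected window is a candidate stuck case.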

Table 4. Evaluation design and reproducibility snapshot for the bounded anchor PoC.

| Category | Value |
| --- | --- |
| Full empirical domain | Order Processing Knowledgebase |
| Raw source types | SQL DML, CSV, JSON, PDF |
| Seeded orders | 9 |
| Scenario count | 8 |
| Source event counts | orders 10, shipments 18, payments 10, support-note records 8 |
| Active Thing type definitions | 4 |
| Built instance count | 26 |
| Canonical event count | 46 |
| Relation surface count | 32 |
| Identity Index count | 26 |
| Current-state projection count | 26 |
| Benchmark question count | 13 |
| Proof-obligation families covered in benchmark design | 4 |
| Compared systems | 4 (AKMM, workflow/status, raw fragments, BI/report) |
| Ablation variants | 5 |
| Rubric dimensions | groundedness, faithfulness, lawful reasoning, cross-source synthesis, traceability, calibrated uncertainty, agent-native structure use |
| Additional evaluation layers | analytics parity, continuous monitoring |
| Seeded capabilities / anomaly cases | payment retry, repeated hold cycle, stuck confirmed order, overdue packed order, late delivery with commitment, multi-shipment order |
| Main generated artifacts | built knowledgebase, evaluation report, expected alerts, expected analytics, scenario catalog |

All benchmark questions in the current empirical program are author-designed and executed against seeded domains. This is important to state plainly. The benchmark records are not free-form prompts scored by impression. They are explicit structured specifications containing question text, proof obligation, epistemic focus, expected answer fragments, minimum evidence counts, minimum source-reference counts, uncertainty expectations, and, for AKMM, required consulted structures. The same fixed question set is posed to all compared systems within a domain, and scoring is then computed deterministically from those benchmark specifications and the returned evidence traces.

The benchmark also includes negative and uncertainty cases. For example, policy-clause and delivery-lateness questions require the system to acknowledge missing evidence rather than infer authority or lateness from incomplete records. An adversarial absent-transition question asks which event changed ORD-1002 from delivered to cancelled; because no such lifecycle entry exists, a faithful system must refuse to invent the transition. These questions matter because Agent-Native knowledge is not only about answering more questions. It is also about knowing when the active thing's memory does not support an answer.
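The absent-transition refusal described above can be sketched as follows. The lifecycle entries and field names are hypothetical stand-ins for the PoC's Lifecycle Memory, but the shape of the check is the point: when no recorded entry supports a transition, the system returns an explicit absence rather than a fabricated event.

```python
# Hypothetical Lifecycle Memory for ORD-1002; entries are illustrative.
lifecycle_memory = {
    "ORD-1002": [
        {"from": "created",   "to": "confirmed", "event": "EV-11"},
        {"from": "confirmed", "to": "packed",    "event": "EV-14"},
        {"from": "packed",    "to": "shipped",   "event": "EV-19"},
        {"from": "shipped",   "to": "delivered", "event": "EV-23"},
    ],
}

def event_that_changed(instance_id, from_state, to_state):
    """Return the recorded event for a transition, or None as an explicit refusal."""
    for entry in lifecycle_memory.get(instance_id, []):
        if entry["from"] == from_state and entry["to"] == to_state:
            return entry["event"]
    return None  # no lifecycle entry exists: a faithful agent must say so, not invent one
```

Here the adversarial question "which event moved ORD-1002 from delivered to cancelled" deterministically yields no supporting entry, which is exactly the behavior the uncertainty rubric rewards.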

This means the current evaluation should be read as bounded methodology evidence rather than as a claim of independent industrial benchmarking. It also explains why the deterministic non-LLM agent matters: the study is designed to test whether the knowledge model itself carries enough structure for grounded case reasoning before large-model variability is introduced. In the terminology of this paper, knowledge families belong to AKMM semantics, benchmark families are groups of questions, proof obligations are the paper-level claims being tested, and rubric dimensions are the scoring checks applied to returned answers. Appendix B gives representative benchmark families and Appendix C describes the baseline constructions; the executable benchmark definitions and rubric logic are included in the companion artifacts.

9.2 Anchor domain: Order Processing

Order Processing remains the deepest anchor because it contains the richest benchmark set, the strongest baseline comparison, the largest ablation family, and both analytics-parity and continuous-monitoring evaluation. The benchmark suite scores identity and entry questions, lifecycle-path reconstruction, lawful-next-step reasoning, shared-event interpretation, calibrated uncertainty, and monitoring behavior against explicit expectations, evidence thresholds, traceability requirements, and uncertainty conditions.

On the tested anchor domain, AKMM achieved a rubric pass rate of 1.0, compared with 0.3077 for workflow/status access, 0.2308 for raw fragments, and 0.7692 for the BI/report baseline. In continuous monitoring, AKMM achieved a monitoring rubric pass rate of 1.0, while the BI/report baseline scored 0.0. In analytics, both AKMM and the BI/report baseline scored 1.0, indicating that AKMM can preserve analytics parity on the tested slice while still providing stronger case-level reasoning and monitoring behavior.

Table 5. Bounded evaluation summary.

| Measure | AKMM | Workflow/Status | Raw Fragments | BI/Report |
| --- | --- | --- | --- | --- |
| Rubric pass rate | 1.0 | 0.3077 | 0.2308 | 0.7692 |
| Answerable-task pass rate | 1.0 | 0.1 | 0.0 | 0.7 |
| Lawful reasoning rate | 1.0 | 0.2 | 0.2 | 0.6 |
| Monitoring rubric pass rate | 1.0 | - | - | 0.0 |
| Analytics rubric pass rate | 1.0 | - | - | 1.0 |
| Agent-native structure use rate | 1.0 | - | - | - |

AKMM preserves analytics parity while outperforming fragment-based and report-style baselines on bounded case reasoning and monitoring in the tested enterprise slice.

9.3 Transfer domains: Manufacturing Maintenance and Insurance Claim Adjudication

The two executed transfer PoCs are smaller by design but methodologically important. Manufacturing Maintenance builds a knowledgebase with 12 instances and 30 canonical events and evaluates six benchmarks covering identity, path reconstruction, transition provenance, lawful operational judgment, event consequence, and calibrated uncertainty. Insurance Claim Adjudication builds a knowledgebase with 12 instances and 22 canonical events and evaluates the same six benchmark families in a different domain language. In both transfer PoCs, AKMM achieved a rubric pass rate of 1.0, while the no_lifecycle_memory ablation fell to 0.3333. These runs do not replace the deeper Order Processing evaluation, but they show that the same active contract, the same query path, and the same dependence on Lifecycle Memory transfer into materially different domains.

9.4 Ablations and knowledge-family coverage

The ablation results are equally important because they test whether the core AKMM structures are merely descriptive or materially necessary. Removing Lifecycle Memory causes the largest collapse, reducing rubric pass rate to 0.4615. Removing Identity Index reduces the score to 0.9231. Removing shared-event composition reduces it to 0.8462, and flattening shared events amplifies event records from 46 to 74, an amplification factor of 1.6087x. These results do not prove the whole thesis, but they support three narrower conclusions that matter to the methodology: the core structures are not decorative, bounded agent-native case reasoning can outperform fragment-based access on the tested enterprise slice, and shared-event composition has real practical value.
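The amplification mechanism behind that last number can be seen in a toy sketch. The events and counts below are illustrative, not the PoC's 46-event dataset: one canonical event with N participating things becomes N per-thing rows once flattened.

```python
# Toy canonical events; each is stored once with its participant set.
canonical_events = [
    {"id": "EV-1", "type": "payment_authorized", "participants": ["ORD-1", "PAY-1"]},
    {"id": "EV-2", "type": "shipment_delivered", "participants": ["ORD-1", "SHP-1"]},
    {"id": "EV-3", "type": "order_created",      "participants": ["ORD-2"]},
]

def flatten(events):
    """Duplicate each shared event once per participating thing."""
    return [
        {"event": e["id"], "thing": p, "type": e["type"]}
        for e in events
        for p in e["participants"]
    ]

flat = flatten(canonical_events)
amplification = len(flat) / len(canonical_events)  # 5 rows from 3 events here
```

In this toy case the factor is about 1.67; the general point is that the factor grows with the average number of participants per shared event, which is why flattening is costly precisely in the multi-thing events that matter most for impact reasoning.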

Figure 6. Ablation summary. Removing Lifecycle Memory causes the largest collapse, while the other ablations produce smaller but still meaningful drops in rubric pass rate.

These benchmarks matter epistemically as well as operationally. Identity and entry questions test identity knowledge. Timeline and event-changed-path questions test temporal and trigger knowledge. Relation-activation and shared-event questions test relational and impact knowledge. Lawful-next-step questions test transition knowledge. Monitoring questions test anomaly knowledge. The evaluation therefore does more than show that AKMM is useful for answering questions. It shows that the methodology exposes several distinct knowledge families more clearly than fragment-based access patterns do on the tested slice.

A fuller query-path illustration and a worked case walkthrough for ORD-1002 are provided in Appendix F so that the main paper can keep the primary review-facing assets in view without losing the concrete example entirely.

10. Threats To Validity

The main trust risk in the present paper is not conceptual ambiguity but empirical boundedness. The benchmark suites are synthetic, seeded, and author-designed. That fits the limited methodological question the paper is testing: whether a proposed knowledge-modelling grammar can be instantiated and evaluated at all. It also means the results should not be read as if they were already independently replicated on messy production data. The perfect AKMM scores in the current runs therefore indicate strong alignment with the authored benchmark and rubric, not final proof of broad industrial superiority.

A second threat concerns baseline construction. All compared systems operate on the same seeded domain content and answer the same fixed question set, but the representational surfaces differ by design. AKMM therefore has the advantage of being evaluated on the structure it explicitly aims to expose. That is appropriate for a methodology paper, yet it still leaves open later questions about how much engineering effort would be required to give competing approaches equally optimized case-level semantics. The current paper answers that question only partially through the comparative modelling analysis and the ablation family, not through a full labor-study or independent baseline engineering contest.

A third threat concerns ingestion realism. The current PoCs prove that fragmented operational sources can be recomposed into AKMM artifacts, but they do not yet measure the full toll of legacy-system ingestion under severe ambiguity, contradictory identifiers, extraction noise, or undocumented business rules. In other words, the current evidence is strongest after composition and weaker on the economics of composition itself. That boundary does not invalidate the methodology claim, but it does limit what can be inferred today about large-scale authoring effort and operational rollout.

A fourth threat concerns the word Universal. The term is used methodologically in this paper, but readers may interpret it as empirical universality or industrial completeness. That would be too strong. The current evidence supports recurrence and transferability of the active-knowledge grammar in bounded cases, not exhaustive validation across all domains.

A fifth threat concerns the cost of semantic discipline. AKMM depends on deciding what counts as meaningful movement at the write boundary. In messy organizations, that decision can be contested, expensive, or politically difficult. The current paper argues that the cost is often worth paying for agent-operable knowledge, but it does not yet measure that cost directly.

11. What AKMM Solves

In practical terms, AKMM addresses a cluster of recurring failures in current enterprise AI and operational systems because it changes what the system can know about a case. The first is fragmented case understanding. Business cases are often spread across many systems, source types, and operational residues, so the agent never meets the case directly. AKMM answers this by making the case explicit as one active thing with bounded identity, current state, lived path, and meaningful shared events.

The second cluster concerns hidden state machines, weak next-step reasoning, and weak monitoring. Many enterprises already operate through stateful cases, but that state-machine logic is hidden in workflow engines, code paths, comments, and human memory. As a result, systems often know the current status weakly and the lawful next move even more weakly. Without explicit lifecycle structure, it also becomes difficult to detect stuck states, missing events, illegal transitions, and escalation conditions. AKMM addresses these problems together by making state, transition, event, and closure part of the knowledge model itself rather than leaving them buried in process residue or inferred at runtime.

The third cluster concerns handoff and operational waste. Humans and agents often hand off only notes and artifacts rather than the thing's active identity and lived path. Search, retrieval, inference, rechecking, and recovery all become more expensive when the world is weakly composed. AKMM addresses these failures by composing the case earlier and more faithfully. In that sense, the methodology is not only about representation. It is also about changing the knowledge available for understanding, monitoring, comparison, and action.

12. Limitations

This paper does not claim more than it has shown; the study has clear limits.

First, the deepest empirical evidence is still concentrated in one full anchor domain: the Order Processing Knowledgebase. The two transfer PoCs in Manufacturing Maintenance and Insurance Claim Adjudication are smaller and lighter, and the remaining universality evidence is still mapping-level rather than full empirical evaluation.

Second, the paper does not empirically prove Universal in the sense of demonstrating every domain. Universality here is a claim about modelling primitives and procedure, not a completed cross-domain empirical program.

Third, the paper does not prove industrial-scale performance. It does not yet include a large-scale storage or latency study.

Fourth, the current paper does not yet include a formal failure-case catalogue beyond ablation failures, uncertainty-boundary questions, and the documented benchmark misses of weakened variants. A fuller methodology evaluation should eventually publish stronger negative cases: ambiguous identity resolution, contradictory sources, weakly stateful domains, and settings in which AKMM composition may be too costly for the value returned.

13. Future Work

Several future directions follow naturally from this work.

The first is broader cross-domain validation. If AKMM is to sustain its universal claim strongly, it should extend beyond the Order Processing anchor and the two lighter transfer PoCs into additional domains such as farming, orbital operations, clinical treatment, legal casework, research worlds, and multi-agent coordination itself.

The second is operational return on investment (ROI) measurement. If AKMM is to make a strong enterprise case beyond methodology and direction, it should measure time-to-explain, handoff quality, monitoring effort, and reduction in rework and recovery.

The third is scale and performance validation. The methodology should eventually be tested with larger synthetic or real datasets and with storage implementations better aligned to the logical AKMM design.

The fourth is formal notation and conformance. For AKMM to become a real practical standard, it needs a mature notation, stronger validation rules, and a conformance profile usable by practitioners.

The fifth is the exploration of a learned layer above AKMM. A Neural Knowledge Model may eventually be built on top of AKMM's symbolic foundation, but that is intentionally outside the scope of the present paper.

14. Conclusion

This paper has argued that enterprise AI currently lacks a practical standard for agent-operable knowledge modelling. In the absence of such a standard, the market often falls back to naive RAG, chunking, and agents built over hidden process logic. AKMM is proposed as a response to that gap.

AKMM makes stateful active reality first-class knowledge. It models the Knowledgebase as an operational knowledge world of Active Things and defines Identity Index, Lifecycle Memory, and Canonical Events as core structures. In this way, it seeks to make the case directly operable for agents rather than merely retrievable from fragments. More importantly, it seeks to make active things knowable as bounded identity, lived time, lawful movement, shared impact, relation, and fulfillment direction.

The empirical program does not prove the entire ambition, but it does provide directional evidence that the foundation is credible. It shows stronger bounded reasoning, stronger monitoring, structural necessity for the core AKMM constructs on the full anchor domain, and successful transfer of the same methodological contract into two lighter domains.

The larger claim of the paper is therefore not that everything has already been solved. It is that a missing modelling foundation exists, that it matters, and that AKMM is a serious candidate for that foundation.

Data And Code Availability

This preprint record contains the manuscript only. The proof-of-concept code, source data, generated knowledgebases, benchmark definitions, evaluation reports, schema artifacts, and transfer-domain materials are not included in this preprint deposit. A separate reproducibility deposit is planned so that those materials can be cited with their own persistent identifier.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Declaration Of Competing Interest

The author declares no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Declaration Of Generative AI And AI-Assisted Technologies

Generative AI tools were used to assist with manuscript review, organization, language refinement, and code/documentation workflows under author supervision. The author reviewed, edited, and is responsible for all manuscript content, claims, references, code, and evaluation interpretation.

References

Anthropic. (2024, December 19). Building effective AI agents. https://www.anthropic.com/research/building-effective-agents

Anthropic. (2025). Effective context engineering for AI agents. https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents

Augusto, A., Conforti, R., Dumas, M., La Rosa, M., Maggi, F. M., Marrella, A., Mecella, M., & Soo, A. (2017). Automated discovery of process models from event logs: Review and benchmark. IEEE Transactions on Knowledge and Data Engineering, 31(4), 686-705. https://arxiv.org/abs/1705.02288

Berti, A., Koren, I., Adams, J. N., Park, G., Knopp, B., Graves, N., Rafiei, M., Liss, L., Tacke Genannt Unterberg, L., Zhang, Y., Schwanen, C., Pegoraro, M., & van der Aalst, W. M. P. (2024). OCEL (Object-Centric Event Log) 2.0 specification. arXiv. https://arxiv.org/abs/2403.01975

Bhattacharya, K., Caswell, N. S., Kumaran, S., Nigam, A., & Wu, F. Y. (2007). Artifact-centered operational modeling: Lessons from customer engagements. IBM Systems Journal, 46(4), 703-721.

Bruckhaus, T. (2024). RAG does not work for enterprises. arXiv. https://arxiv.org/abs/2406.04369

Chen, P. P.-S. (1976). The entity-relationship model: Toward a unified view of data. ACM Transactions on Database Systems, 1(1), 9-36.

Gruber, T. R. (1995). Toward principles for the design of ontologies used for knowledge sharing. International Journal of Human-Computer Studies, 43(5-6), 907-928.

Hogan, A., Blomqvist, E., Cochez, M., D'Amato, C., Melo, G. de, Gutierrez, C., Kirrane, S., Gayo, J. E. L., Navigli, R., Neumaier, S., Ngomo, A.-C. N., Polleres, A., Rashid, S. M., Rula, A., Schmelzeisen, L., Sequeda, J., Staab, S., & Zimmermann, A. (2021). Knowledge graphs. ACM Computing Surveys, 54(4), Article 71. https://doi.org/10.1145/3447772

Hull, R. (2008). Artifact-centric business process models: Brief survey of research results and challenges. In On the Move to Meaningful Internet Systems: OTM 2008 (pp. 1152-1163). Springer. https://doi.org/10.1007/978-3-540-88873-4_17

Manna, M. R. (2026). The World of Active Things. Independently published. ISBN 979-8258366849.

Nigam, A., & Caswell, N. S. (2003). Business artifacts: An approach to operational specification. IBM Systems Journal, 42(3), 428-445. https://doi.org/10.1147/sj.423.0428

Object Management Group. (2011). Business Process Model and Notation (BPMN), Version 2.0. https://www.omg.org/spec/BPMN/2.0/

Object Management Group. (2016). Case Management Model and Notation (CMMN), Version 1.1. https://www.omg.org/spec/CMMN/1.1/

Object Management Group. (2019). Decision Model and Notation (DMN), Version 1.3. https://www.omg.org/spec/DMN/1.3/

OpenAI. (2025). A practical guide to building agents. https://openai.com/business/guides-and-resources/a-practical-guide-to-building-ai-agents/

W3C. (2014). RDF 1.1 Concepts and Abstract Syntax. https://www.w3.org/TR/rdf11-concepts/

W3C OWL Working Group. (2012). OWL 2 Web Ontology Language document overview (Second Edition). https://www.w3.org/TR/owl-overview/

Yu, H., Gan, A., Zhang, K., Tong, S., Liu, Q., & Liu, Z. (2024). Evaluation of retrieval-augmented generation: A survey. arXiv. https://arxiv.org/abs/2405.07437

Appendix A. Relationship Between The Paper And The Companion Specification

This paper and the AKMM companion specification serve different purposes.

The paper is responsible for:

  • establishing the missing-standard problem
  • stating the foundational claim
  • defining the core methodology at a level sufficient for scholarly discussion
  • presenting bounded empirical evidence

The companion specification is responsible for:

  • tighter methodological definitions
  • clearer normative language
  • conformance-oriented statements
  • implementation-neutral structural requirements

This separation is deliberate. If the full specification were reproduced inside the main paper, the argument would become harder to read and easier to reject as a systems manual rather than a research contribution. Conversely, if the paper omitted the core definitions entirely, the contribution would become vague. The present structure therefore keeps the paper self-contained while allowing the methodology to evolve through a parallel specification.

The companion materials include an abstract AKMM schema template, worked domain schemas for Order Processing, Manufacturing Maintenance, and Insurance Claim Adjudication, a provisional notation preview, a worked Order Processing model diagram, compact transfer-model previews for Manufacturing Maintenance and Insurance Claim Adjudication, and the generated artifacts for the two executed transfer PoCs. These materials are not all of the same evidentiary kind. The schema and notation artifacts make the authored shape of AKMM more visible and teachable, while the transfer-domain artifacts show that the same AKMM artifact structure can also be executed in materially different domains.

Appendix B. Benchmark Families And Scoring Approach

All benchmark questions in the current paper were authored by the present study team against seeded domains built specifically to test the AKMM methodology. The benchmark set is therefore not an external public benchmark and should be interpreted as bounded methodology evidence. This appendix makes the scoring shape more visible so that the trust boundary of the current empirical program is explicit.

Each benchmark record contains at least:

  • a fixed question
  • a proof-obligation family
  • an epistemic focus
  • expected answer fragments or expected uncertainty markers
  • minimum evidence-item thresholds
  • minimum source-reference thresholds
  • optional multi-source synthesis thresholds
  • optional lawful-reasoning fragments
  • optional AKMM structure requirements

In the full anchor PoC, result scoring is deterministic. A returned answer is checked against the benchmark specification for groundedness, faithfulness, lawful reasoning when applicable, cross-source synthesis when applicable, traceability, calibrated uncertainty, and, for AKMM, whether the required structures were actually consulted. The transfer PoCs use the same general pattern with lighter benchmark suites.
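A minimal sketch of that deterministic scoring shape follows. The field names and thresholds are illustrative, not the executable benchmark schema shipped in the companion artifacts:

```python
# Hypothetical benchmark record mirroring the fields listed above.
benchmark = {
    "question": "How did ORD-1002 reach its current state?",
    "expected_fragments": ["confirmed", "shipped", "delivered"],
    "min_evidence_items": 3,
    "min_source_refs": 2,
    "required_structures": ["lifecycle_memory", "identity_index"],
}

def score(record, answer, evidence, sources, consulted):
    """Compute per-dimension pass/fail mechanically from the benchmark spec."""
    return {
        "groundedness": all(f in answer for f in record["expected_fragments"]),
        "traceability": len(evidence) >= record["min_evidence_items"]
                        and len(sources) >= record["min_source_refs"],
        "structure_use": set(record["required_structures"]) <= set(consulted),
    }
```

Because every dimension is a mechanical check against the specification and the returned evidence trace, no impressionistic grading enters the rubric.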

The bounded evaluation is organized around several benchmark families.

B.1 Identity And Entry

Representative questions:

  • What is this thing?
  • What is its current state?
  • What relations are currently active?

B.2 Lifecycle Path

Representative questions:

  • How did it get here?
  • Which event changed its path?
  • Why did it reach this outcome?

B.3 Lawful Reasoning

Representative questions:

  • What can happen next?
  • Can action X happen now?
  • Is this transition lawful?

B.4 Uncertainty

Representative questions:

  • Which policy clause authorized this action?
  • Is the available evidence sufficient to answer this question?

B.5 Shared Event Composition

Representative questions:

  • Which Active Things were affected by one event?
  • What were the per-thing consequences?

B.6 Monitoring

Representative questions:

  • Which cases are stuck?
  • Which expected event is missing?
  • Which cases require escalation?

These families are not intended as a universal benchmark suite. They are intended as a first structured evaluation of the Agent-Native claim and of the corresponding knowledge families exposed by AKMM on a bounded enterprise slice.

Appendix C. Baselines In The Current PoC

The PoC compares AKMM against three baselines chosen to reflect common enterprise practice.

C.1 Raw Fragment Baseline

This baseline works directly over source fragments such as SQL DML, CSV, JSON, and PDF artifacts.

Purpose:

  • show the cost of case reconstruction from heterogeneous fragments

C.2 Workflow/Status Baseline

This baseline represents common enterprise practice in which current status plus some event residue or workflow metadata are available, but the full active case is not explicitly composed.

Purpose:

  • show the weakness of hidden or partial state-machine logic

C.3 BI/Report Baseline

This baseline represents a strong denormalized case/reporting view.

Purpose:

  • show that AKMM is not only better than weak baselines
  • show that AKMM can preserve analytics parity while still being stronger on active-case reasoning and monitoring

Across all three baseline constructions, the underlying seeded domain content is held constant. What changes is the representational surface made available to the answering system: raw fragments, workflow/status residue, denormalized BI/report view, or AKMM active-case structures. This does not eliminate all methodological bias, but it does ensure that the comparison is not between different underlying worlds.

Appendix D. Claim Boundaries

The paper separates four kinds of claims.

D.1 Foundational Claims

Examples:

  • a missing standard exists
  • stateful active reality should be first-class knowledge
  • a Knowledgebase can become an authored operational knowledge world for agents

These are argued conceptually and methodologically.

D.2 Current Directional Evidence

Examples:

  • AKMM improves bounded case reasoning on the tested slice
  • Lifecycle Memory is not decorative
  • shared-event composition matters
  • monitoring improves on the tested slice
  • the same methodological contract transfers into two lighter domains

These are supported by the bounded PoC.

D.3 Current Limits

Examples:

  • no industrial-scale validation yet
  • no universal empirical proof yet

D.4 Long-Range Aspirations

Examples:

  • more reliable enterprise agents
  • lower runtime waste

These are part of the paper's vision but are not presented as demonstrated facts.

Appendix E. Companion Schema Preview

The companion package includes an abstract schema template together with worked domain schemas for Order Processing, Manufacturing Maintenance, and Insurance Claim Adjudication. A short excerpt from the Order Processing schema is repeated here so the authored shape of the model remains visible inside the paper.

Illustrative excerpt from the worked Order Processing schema:

schema:
  id: "akmm.order_processing.v1"
  name: "Order Processing Domain Schema"
  schema_kind: "domain_schema"

knowledgebase:
  id: "kb.order_processing"
  name: "Order Processing Knowledgebase"
  purpose: >
    Make order-processing cases directly operable for agents and humans.

active_thing_types:
  - id: "Order"
    purpose: >
      Move a customer order toward lawful fulfillment,
      delivery, or closure.
    start:
      event_types: ["order_created"]
    end:
      states: ["delivered", "cancelled"]
      event_types: ["shipment_delivered", "order_cancelled"]
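To make the authored shape operational, the excerpt above can be rendered as a small in-memory structure. The following Python sketch mirrors the YAML field names, but the class itself is illustrative and not part of the companion schema package.

```python
from dataclasses import dataclass

@dataclass
class ActiveThingType:
    """Hypothetical in-memory rendering of an AKMM Active Thing Type."""
    id: str
    purpose: str
    start_event_types: list
    end_states: list
    end_event_types: list

    def is_lawful_start(self, event_type: str) -> bool:
        # A case may only begin with a declared start event.
        return event_type in self.start_event_types

    def is_lawful_end(self, state: str, event_type: str) -> bool:
        # A case may only close in a declared end state via a declared end event.
        return state in self.end_states and event_type in self.end_event_types

order_type = ActiveThingType(
    id="Order",
    purpose="Move a customer order toward lawful fulfillment, delivery, or closure.",
    start_event_types=["order_created"],
    end_states=["delivered", "cancelled"],
    end_event_types=["shipment_delivered", "order_cancelled"],
)

print(order_type.is_lawful_start("order_created"))                  # True
print(order_type.is_lawful_end("delivered", "shipment_delivered"))  # True
print(order_type.is_lawful_end("on_hold", "hold_applied"))          # False
```

The design choice illustrated here is that start and end boundaries are data, not hidden workflow logic, so an agent can test lawfulness directly.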

The worked diagram shown in the main paper and included in the companion materials presents the same domain in a small provisional AKMM modelling notation. It shows the Knowledgebase boundary, the Active Thing cards for Order, Payment, and Shipment, the typed relations between them, and a Canonical Event example (payment_authorized) with participant roles and per-thing consequences. This is intentionally a preview rather than a final notation standard, but it makes visible what an AKMM model looks like in authored form before runtime build.

Appendix F. Worked Query Path And Case Walkthrough

The main paper keeps the primary methodological and evaluation assets in view. This appendix preserves one concrete query-path and case example for readers who want to inspect how an AKMM answer is grounded operationally.

Table F1. Worked AKMM Case Example: ORD-1002.

  • Identity Index: type=Order, current_state=delivered, start=order_created, end=[delivered, cancelled], relations to PAY-7002 and SHP-5002, purpose=lawful fulfillment or closure
  • Lifecycle Memory: order_created -> payment_authorized -> hold_applied -> hold_released -> shipment_created -> shipment_packed -> shipment_dispatched -> shipment_delivered
  • Canonical Event example: PDF-002, event_type=hold_released, consequence on ORD-1002: on_hold -> ready_for_fulfillment, source=support_notes.pdf:PDF-002
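One way to see how such an answer is grounded is to replay the Lifecycle Memory and check the derived state against the Identity Index. The sketch below does this for ORD-1002; the `STATE_AFTER` map is an illustrative event-to-resulting-state encoding that AKMM itself does not prescribe.

```python
# Resulting state after each lifecycle event (hypothetical encoding).
STATE_AFTER = {
    "order_created": "created",
    "payment_authorized": "ready_for_fulfillment",
    "hold_applied": "on_hold",
    "hold_released": "ready_for_fulfillment",
    "shipment_created": "fulfilling",
    "shipment_packed": "fulfilling",
    "shipment_dispatched": "in_transit",
    "shipment_delivered": "delivered",
}

# ORD-1002's Lifecycle Memory as listed in Table F1.
lifecycle_memory = [
    "order_created", "payment_authorized", "hold_applied", "hold_released",
    "shipment_created", "shipment_packed", "shipment_dispatched",
    "shipment_delivered",
]

def replay(events):
    """Derive the current state by replaying lifecycle events in order."""
    state = None
    for event in events:
        state = STATE_AFTER[event]
    return state

identity_index_state = "delivered"  # as recorded in ORD-1002's Identity Index
derived = replay(lifecycle_memory)
print(derived == identity_index_state)  # True
```

This consistency check is the operational sense in which the lived path, not just the latest status, grounds the answer.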
