The Knowledge Layer is the foundation the Router draws from. It stores reusable canonical assets in Git — versioned, reviewable, and portable across models. It is what makes the system knowledge-first: tasks are matched against these assets before any model is invoked.
Skills: Reusable capabilities that encode how to approach a known class of task.
Workflows: Multi-step reasoning procedures defining how a complex task is decomposed and executed.
Context Packs: Structured project or domain knowledge — the situational awareness a task requires.
Prompt Assets: Standardised interaction templates that ensure consistent, high-quality model instructions.
Properties — versioned · reviewable · portable across models
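The four asset types above could be modelled as a single versioned record type. The sketch below is illustrative only: the names `AssetKind` and `KnowledgeAsset` and the field layout are assumptions, not part of the published specification.

```python
# Hypothetical model of a Knowledge Layer asset as stored in Git.
# All names and fields here are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum

class AssetKind(Enum):
    SKILL = "skill"
    WORKFLOW = "workflow"
    CONTEXT_PACK = "context_pack"
    PROMPT_ASSET = "prompt_asset"

@dataclass(frozen=True)
class KnowledgeAsset:
    kind: AssetKind
    name: str                 # canonical identifier, e.g. "summarise-meeting-notes"
    version: str              # version tracked and reviewed in Git
    body: str                 # the asset content itself
    tags: frozenset = field(default_factory=frozenset)

asset = KnowledgeAsset(
    kind=AssetKind.SKILL,
    name="summarise-meeting-notes",
    version="1.2.0",
    body="...",
    tags=frozenset({"summarisation"}),
)
print(asset.kind.value, asset.version)  # prints: skill 1.2.0
```

Making the record immutable (`frozen=True`) mirrors the canonical-storage principle: a given version of an asset never changes in place; new behaviour means a new version.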
The Router is the deterministic decision engine and instruction compiler at the core of every task execution. It evaluates incoming tasks before any model is invoked — this is the knowledge-first principle in action.
The Router answers a single question — Do we already know how to solve this? — and makes four decisions: which skills apply, which workflows to load, which context packs provide the right situational knowledge, and whether existing assets already solve the task.
Once the Router has selected the relevant assets, it compiles them into an Execution Plan — a structured, named artifact passed to the Execution Layer. The Router does not forward requests; it transforms them.
Here, deterministic means that routing decisions are transparent and explainable — not that the Router is merely hardcoded or rule-based. Its selections can be traced, reviewed, and corrected: accountable decision-making rather than probabilistic inference.
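A deterministic, traceable routing step might look like the sketch below. The function and field names (`route`, `ExecutionPlan`, `trace`) are hypothetical, the keyword-overlap matching is a stand-in for real selection criteria, and partial-match handling is omitted for brevity.

```python
# Illustrative sketch of deterministic routing: the same task and catalogue
# always produce the same plan, and the trace records why each asset matched.
from dataclasses import dataclass, field

@dataclass
class ExecutionPlan:
    task: str
    assets: list            # names of selected knowledge assets
    match: str              # "full" or "none" in this simplified sketch
    trace: dict = field(default_factory=dict)

def route(task: str, catalogue: dict) -> ExecutionPlan:
    """Select assets whose tags overlap the task's keywords."""
    words = set(task.lower().split())
    trace = {name: sorted(words & tags)
             for name, tags in catalogue.items() if words & tags}
    match = "full" if trace else "none"
    return ExecutionPlan(task=task, assets=sorted(trace), match=match, trace=trace)

catalogue = {
    "summarise-notes": {"summarise", "notes"},
    "draft-email": {"draft", "email"},
}
plan = route("summarise these meeting notes", catalogue)
print(plan.assets, plan.match)  # prints: ['summarise-notes'] full
```

The `trace` field is the point: every selection carries the evidence for it, so a reviewer can see exactly why an asset was (or was not) chosen.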
The Execution Layer is the set of models and tools that perform the actual task. It receives its instructions from the Router's compiled Execution Plan — not from raw user input.
Current execution targets: Claude, OpenAI models, Gemini, automated workflows, and external APIs. The design is model-agnostic — skills and assets are portable across providers. Knowledge structures do not depend on any single model.
Execution Layer outputs are structured Artifacts — not raw model responses. For complex or strategic tasks requiring multi-agent reasoning, the Cognitive Layer sits above this execution layer.
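Model-agnostic execution can be sketched as a common adapter interface that every provider implements. The adapter classes and the artifact dictionary shape below are invented for illustration; real adapters would call each provider's API.

```python
# Sketch of interchangeable execution targets behind one interface.
# Class names and the artifact shape are illustrative assumptions.
from abc import ABC, abstractmethod

class ExecutionTarget(ABC):
    @abstractmethod
    def run(self, instructions: str) -> dict:
        """Return a structured Artifact, never a raw response string."""

class ClaudeTarget(ExecutionTarget):
    def run(self, instructions: str) -> dict:
        # a real adapter would call the provider's API here
        return {"type": "artifact", "producer": "claude", "body": instructions}

class OpenAITarget(ExecutionTarget):
    def run(self, instructions: str) -> dict:
        return {"type": "artifact", "producer": "openai", "body": instructions}

def execute(instructions: str, target: ExecutionTarget) -> dict:
    artifact = target.run(instructions)
    assert artifact["type"] == "artifact"  # outputs are structured, not raw text
    return artifact

for target in (ClaudeTarget(), OpenAITarget()):  # providers are interchangeable
    print(execute("compiled instructions", target)["producer"])
```

Because the compiled Execution Plan is the contract, swapping providers changes only which adapter receives it — the knowledge structures upstream are untouched.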
Single-model reasoning is insufficient for complex or strategic tasks. The Cognitive Layer provides structured multi-agent reasoning as a higher-order capability — not an experimental feature, but a production-grade option for tasks that require it.
The reasoning protocol is Divergent-Convergent Reasoning (DCR), defined canonically at thelucidmind.ai. The Router invokes multi-agent reasoning when a task is classified as complex or strategic — selecting a DCR workflow from the Knowledge Layer.
DCR agent roles bring different viewpoints before convergence: a technologist evaluates feasibility, a strategist evaluates direction, a skeptic identifies failure modes, and an economist evaluates cost and trade-offs. Perspectives diverge before converging on a reasoned output.
The Resonance Architecture provides optional convergence monitoring for cognitive-layer reasoning cycles — detecting when agent perspectives have sufficiently converged to produce a reliable output.
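A DCR cycle with resonance-style convergence monitoring might be sketched as follows. The four roles come from the text; the numeric scoring, the tolerance threshold, and all function names are invented for illustration — in practice each role would be a separate model invocation.

```python
# Minimal sketch of a Divergent-Convergent Reasoning (DCR) cycle with a
# convergence monitor. Scoring scheme and thresholds are illustrative only.

ROLES = ("technologist", "strategist", "skeptic", "economist")

def diverge(proposal: str) -> dict:
    # Stand-in: real divergence would invoke one agent per role.
    stub_scores = {"technologist": 0.8, "strategist": 0.7,
                   "skeptic": 0.6, "economist": 0.7}
    return {role: {"position": f"{role} view on {proposal!r}",
                   "score": stub_scores[role]} for role in ROLES}

def converged(perspectives: dict, tolerance: float = 0.3) -> bool:
    # Resonance-style monitor: perspectives have converged when the
    # spread of their scores falls within the tolerance.
    scores = [p["score"] for p in perspectives.values()]
    return max(scores) - min(scores) <= tolerance

def dcr(proposal: str) -> dict:
    perspectives = diverge(proposal)
    if not converged(perspectives):
        raise RuntimeError("perspectives have not converged; iterate further")
    consensus = sum(p["score"] for p in perspectives.values()) / len(perspectives)
    return {"proposal": proposal, "consensus": round(consensus, 2),
            "perspectives": perspectives}

result = dcr("adopt a plugin architecture")
print(result["consensus"])  # prints: 0.7
```

The shape matters more than the numbers: divergence produces independent positions, the monitor decides whether they are close enough to trust, and only then is a converged output emitted.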
This process applies to every task execution path — whether the Router finds a full match, a partial match, or no prior knowledge. Steps 1–6 run in every case; step 7 is optional.
User submits a task — intent expressed as input, triggering the routing process.
The Router analyses the task against the Knowledge Layer — applying the core question.
Relevant knowledge assets are identified: applicable Skills, Workflows, and Context Packs.
Selected assets are compiled into an Execution Plan — the named artifact passed to execution.
The Execution Layer performs the task using the plan — models and tools acting on compiled instructions.
Structured artifacts are produced — persistent, typed, and stored in Memory — making execution consistent and reproducible.
If the execution produced a valuable new reasoning pattern, it is formalised as a Skill or Workflow and stored back in the Knowledge Layer — closing the learning loop (inferred from the system design; not always triggered).
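The seven steps above can be sketched as a single pipeline. Everything here is an illustrative assumption — the function names, the keyword matching, and the plan and artifact shapes — with step 7 modelled as an optional capture hook.

```python
# Hypothetical end-to-end sketch of the seven-step runtime flow.
def run_task(task, knowledge, execute, capture=None):
    # Steps 1-2: the task arrives and is analysed against the Knowledge Layer.
    words = set(task.lower().split())
    # Step 3: identify the relevant knowledge assets.
    assets = sorted(name for name, tags in knowledge.items() if words & tags)
    # Step 4: compile the Execution Plan.
    plan = {"task": task, "assets": assets}
    # Step 5: the Execution Layer performs the task from the plan.
    result = execute(plan)
    # Step 6: the output is a structured artifact, stored in Memory.
    artifact = {"type": "artifact", "plan": plan, "result": result}
    # Step 7 (optional): formalise a valuable new pattern back into knowledge.
    if capture is not None:
        capture(artifact, knowledge)
    return artifact

knowledge = {"summarise-notes": {"summarise", "notes"}}
captured = []
artifact = run_task("summarise notes", knowledge,
                    execute=lambda plan: f"done with {plan['assets']}",
                    capture=lambda art, kb: captured.append(art))
print(artifact["result"], len(captured))
```

Note that the model is only invoked at step 5, after routing and compilation have already decided what it will be asked to do — the knowledge-first principle in miniature.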
Six primitives form the complete vocabulary of the system; everything else is composed from them.
Task: User intent expressed as an input that triggers routing and execution.
Knowledge Assets: The canonical, versioned building blocks of the Knowledge Layer — Skills, Workflows, Context Packs, and Prompt Assets.
Execution Plan: The compiled structured plan produced by the Router, defining which assets to apply, which models to invoke, and what artifact outputs to expect.
Execution Layer: The models and tools that perform the actual task — Claude, OpenAI models, Gemini, automated workflows, and external APIs.
Artifacts: Structured, persistent outputs produced by execution. The knowledge currency of the system. See Artifacts specification below.
Memory: Persistent storage with two backends: Notion (navigable knowledge) and Git (canonical versioned assets). Memory is cross-cutting infrastructure, not a fifth architecture layer.
Artifacts are structured outputs produced by workflows and skills — persistent reasoning results that outlast the session that produced them. They are the knowledge currency of the system: not raw model outputs, but structured, typed documents that accumulate value across executions.
Artifacts serve three purposes: they enable reproducible task execution, they accumulate knowledge in the Memory layer, and they provide inputs for future workflows. Artifacts are produced in step 6 of the runtime flow — and optionally trigger step 7, capability capture, when a valuable new reasoning pattern is identified.
Each artifact documents reasoning state — tasks can be resumed, reviewed, or replicated from the artifact alone.
Artifacts are stored in Memory (Notion for navigation, Git for canonical versioning) where they inform future routing decisions.
Downstream workflows receive artifacts as structured inputs — not unstructured text, but typed reasoning results.
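A typed, persistent artifact along these lines could be modelled as below. The field names and the content-derived key are assumptions for illustration, not the published Artifacts specification.

```python
# Hypothetical shape of an Artifact: a typed record carrying enough
# reasoning state to resume, review, or replicate the task.
# Field names and the key scheme are illustrative assumptions.
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass
class Artifact:
    artifact_type: str    # e.g. "analysis", "decision-record"
    task: str             # the originating task
    plan_assets: list     # which knowledge assets were applied
    body: str             # the reasoning result itself

    def key(self) -> str:
        """Stable, content-derived identity, suitable for versioned storage."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()[:12]

a = Artifact("analysis", "summarise notes", ["summarise-notes"], "...")
record = asdict(a)        # plain dict, serialisable for Notion / Git backends
print(a.key(), record["artifact_type"])
```

Because the key is derived from the content, identical reasoning results map to the same identity — a simple way for the Memory layer to deduplicate and for downstream workflows to reference an artifact unambiguously.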
These five principles are not aspirations — they are design constraints that govern every architectural decision above.
Knowledge First
Every task is matched against existing knowledge assets before invoking a model. If a reusable skill or workflow applies, it takes precedence over raw model inference.
Canonical Storage
Skills, workflows, context packs, and prompt assets are stored as canonical, versioned artifacts in Git — authoritative, auditable, and reproducible.
Deterministic Control
Routing decisions are transparent and explainable. The Router selects assets and assembles an Execution Plan through logical criteria, not probabilistic inference.
Model Agnostic
Skills and assets are portable across model providers. Claude, OpenAI, and Gemini are interchangeable at the Execution Layer — knowledge structures do not depend on any single model.
Progressive Learning
Valuable reasoning patterns discovered during execution can be converted into reusable skills and workflows, continuously expanding the system's capability base.
myKungFu evolves toward a knowledge-driven AI operating system — one where every capability is formalised as a skill, execution is governed by the Router, models are interchangeable at the execution layer, and reasoning artifacts accumulate as persistent knowledge.
The trajectory is from prompting a model to operating a structured cognitive system — a shift that changes not just what the system can do, but how reliably and reproducibly it does it.
Note — 'evolves toward' is the precise framing: this is a trajectory, not a current-state claim.
Full specification of routing logic, enforcement mechanisms, and the Router's role in knowledge governance.
Canonical format for defining, versioning, and publishing reusable skills in the Knowledge Layer.
Live inventory of current skills, workflows, context packs, and prompt assets — with coverage and quality indicators.
Phased development plan: from current architecture toward the knowledge-driven AI operating system north star.