mykungfu.ai — Architecture
The Lucid Systems Implementation Stack

A knowledge-driven AI execution system.

myKungFu is a knowledge-driven AI execution system that routes tasks through reusable capabilities to produce structured artifacts and continuously expand its capability base.

At its centre is a single question: Do we already know how to solve this?

If yes — the Router loads canonical knowledge assets and assembles an Execution Plan. If partially — it combines existing assets with model reasoning. If no — it falls through to model default reasoning, and the result may become a new canonical asset.
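The three routing paths can be sketched as a single decision function. This is an illustrative sketch only: the keyword-overlap matching criterion, the names, and the types are assumptions chosen for clarity, not the Router's documented logic.

```python
from dataclasses import dataclass
from enum import Enum

class RoutePath(Enum):
    FULL_MATCH = "full"        # canonical assets fully cover the task
    PARTIAL_MATCH = "partial"  # assets combined with model reasoning
    NO_MATCH = "none"          # fall through to model default reasoning

@dataclass
class RouteDecision:
    path: RoutePath
    assets: list[str]  # names of the matched canonical assets

def route(task_keywords: set[str], asset_index: dict[str, set[str]]) -> RouteDecision:
    """Answer 'Do we already know how to solve this?' against an asset index.

    Illustrative criterion: an asset matches fully when all of its keywords
    appear in the task, partially when at least one does.
    """
    full = [name for name, kw in asset_index.items() if kw <= task_keywords]
    partial = [name for name, kw in asset_index.items() if kw & task_keywords]
    if full:
        return RouteDecision(RoutePath.FULL_MATCH, full)
    if partial:
        return RouteDecision(RoutePath.PARTIAL_MATCH, partial)
    return RouteDecision(RoutePath.NO_MATCH, [])
```

Whatever the real matching machinery is, the important property is that the decision is an inspectable value, not a hidden model choice.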

This shifts the practice from prompt experimentation to a progressively improving cognitive system, one that learns from its own execution.

System overview — Knowledge · Control · Execution · Cognitive layers
Knowledge Layer
Canonical Assets

The Knowledge Layer is the foundation the Router draws from. It stores reusable canonical assets in Git — versioned, reviewable, and portable across models. It is what makes the system knowledge-first: tasks are matched against these assets before any model is invoked.

Skills

Reusable capabilities or workflows that encode how to approach a known class of task.

Workflows

Multi-step reasoning procedures defining how a complex task is decomposed and executed.

Context Packs

Structured project or domain knowledge — the situational awareness a task requires.

Prompt Assets

Standardised interaction templates that ensure consistent, high-quality model instructions.

Properties — versioned · reviewable · portable across models
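One plausible shape for a canonical asset record, assuming semantic versioning and a kind/name/version repository layout; both details are invented for illustration, not the documented format:

```python
from dataclasses import dataclass
from enum import Enum

class AssetKind(Enum):
    SKILL = "skill"
    WORKFLOW = "workflow"
    CONTEXT_PACK = "context_pack"
    PROMPT_ASSET = "prompt_asset"

@dataclass(frozen=True)
class KnowledgeAsset:
    kind: AssetKind
    name: str
    version: str  # assumed semantic version, tracked as a file in Git
    body: str     # the asset content itself (markdown, template, procedure)

    @property
    def path(self) -> str:
        """Repository path: keeping assets as plain files is what makes
        them reviewable and versionable with ordinary Git tooling."""
        return f"{self.kind.value}s/{self.name}/{self.version}.md"
```

Storing assets as plain versioned files, rather than in a model-specific format, is what makes them portable across providers.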

Control Layer
The Router

The Router is the deterministic decision engine and instruction compiler at the core of every task execution. It evaluates incoming tasks before any model is invoked — this is the knowledge-first principle in action.

The Router answers a single question — Do we already know how to solve this? — and makes four decisions: which skills apply, which workflows to load, which context packs provide the right situational knowledge, and whether existing assets already solve the task.

Once the Router has selected the relevant assets, it compiles them into an Execution Plan — a structured, named artifact passed to the Execution Layer. The Router does not forward requests; it transforms them.

Here, deterministic means that routing decisions are transparent and explainable, not that they are hardcoded or purely rule-based. The Router's selections can be traced, reviewed, and corrected: accountable decision-making rather than opaque probabilistic inference.
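The compile step can be sketched as below. The `ExecutionPlan` fields and the `trace` log are assumptions chosen to illustrate the two properties the text names, transformation rather than forwarding, and reviewable decisions; they are not the actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ExecutionPlan:
    """The named, structured artifact the Router hands to the Execution Layer."""
    task: str
    skills: list[str]
    workflows: list[str]
    context_packs: list[str]
    trace: list[str] = field(default_factory=list)  # why each asset was chosen

def compile_plan(task: str, selected: dict[str, list[str]]) -> ExecutionPlan:
    """Transform (not forward) a request into an Execution Plan.

    Every selection is logged to `trace`, keeping routing decisions
    traceable, reviewable, and correctable.
    """
    plan = ExecutionPlan(
        task=task,
        skills=selected.get("skills", []),
        workflows=selected.get("workflows", []),
        context_packs=selected.get("context_packs", []),
    )
    for category, names in selected.items():
        for name in names:
            plan.trace.append(f"selected {category[:-1]} '{name}'")
    return plan
```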

Router Decision Model — three routing paths from a single question
Execution Layer
Models and Tools

The Execution Layer is the set of models and tools that perform the actual task. It receives its instructions from the Router's compiled Execution Plan — not from raw user input.

Current execution targets: Claude, OpenAI models, Gemini, automated workflows, and external APIs. The design is model-agnostic — skills and assets are portable across providers. Knowledge structures do not depend on any single model.

Execution Layer outputs are structured Artifacts — not raw model responses. For complex or strategic tasks requiring multi-agent reasoning, the Cognitive Layer sits above this execution layer.

Claude
OpenAI models
Gemini
Automated workflows
External APIs
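Model-agnosticism at this layer amounts to programming against an interface. A minimal sketch, with stubbed targets standing in for real provider API calls (the class and method names are assumptions, not the system's actual interfaces):

```python
from typing import Protocol

class ExecutionTarget(Protocol):
    """Any model or tool that can act on compiled Execution Plan instructions."""
    def execute(self, instructions: str) -> str: ...

class ClaudeTarget:
    def execute(self, instructions: str) -> str:
        # A real implementation would call the Anthropic API here.
        return f"[claude] {instructions}"

class OpenAITarget:
    def execute(self, instructions: str) -> str:
        # A real implementation would call the OpenAI API here.
        return f"[openai] {instructions}"

def run(target: ExecutionTarget, instructions: str) -> str:
    """The caller depends only on the protocol, so providers are interchangeable."""
    return target.execute(instructions)
```

Because knowledge assets live outside any one provider, swapping the target changes who executes, not what is executed.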
Cognitive Layer
Multi-Agent Reasoning
Advanced

Single-model reasoning is often insufficient for complex or strategic tasks. The Cognitive Layer provides structured multi-agent reasoning as a higher-order capability: not an experimental feature, but a production-grade option for tasks that require it.

The reasoning protocol is Divergent-Convergent Reasoning (DCR), defined canonically at thelucidmind.ai. The Router invokes multi-agent reasoning when a task is classified as complex or strategic — selecting a DCR workflow from the Knowledge Layer.

DCR agent roles bring different viewpoints before convergence: a technologist evaluates feasibility, a strategist evaluates direction, a skeptic identifies failure modes, and an economist evaluates cost and trade-offs. Perspectives diverge before converging on a reasoned output.

The Resonance Architecture provides optional convergence monitoring for cognitive-layer reasoning cycles — detecting when agent perspectives have sufficiently converged to produce a reliable output.
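A toy rendering of one DCR cycle, with hard-coded role scores and a standard-deviation spread standing in for real agent outputs and Resonance-style convergence monitoring. All values, thresholds, and function names here are invented for illustration.

```python
from statistics import pstdev

# Stub evaluations: each role scores the proposal from its own viewpoint.
ROLES = {
    "technologist": 0.8,  # feasibility
    "strategist": 0.7,    # direction
    "skeptic": 0.3,       # failure modes pull the score down
    "economist": 0.6,     # cost and trade-offs
}

def diverge(task: str) -> dict[str, float]:
    """Divergent phase: each role evaluates independently (stubbed here)."""
    return dict(ROLES)

def has_converged(scores: dict[str, float], tolerance: float = 0.3) -> bool:
    """Stand-in for convergence monitoring: perspectives count as converged
    when their spread (population std. dev.) falls below a tolerance."""
    return pstdev(scores.values()) < tolerance

def converge(scores: dict[str, float]) -> float:
    """Convergent phase: collapse viewpoints into one output (here, the mean)."""
    return sum(scores.values()) / len(scores)
```

The point of the sketch is the shape of the protocol, deliberate divergence first, a measured convergence check second, not the arithmetic.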

Runtime Task Flow
From Task to Artifact

This process applies on every task execution path — whether the Router finds a full match, a partial match, or no prior knowledge. Steps 1–6 are consistent. Step 7 is optional.

01
Submit

User submits a task — intent expressed as input, triggering the routing process.

02
Analyse

The Router analyses the task against the Knowledge Layer — applying the core question.

03
Select

Relevant knowledge assets are identified: applicable Skills, Workflows, and Context Packs.

04
Assemble

Selected assets are compiled into an Execution Plan — the named artifact passed to execution.

05
Execute

The Execution Layer performs the task using the plan — models and tools acting on compiled instructions.

06
Produce

Structured artifacts are produced: persistent, typed, and stored in Memory. Because execution runs from compiled plans and canonical assets, results are consistent and reproducible.

07
Formalise
optional

If the execution produced a valuable new reasoning pattern, it is formalised as a Skill or Workflow and stored back in the Knowledge Layer, closing the learning loop. (This capture step is inferred from the system design; it is not triggered on every execution.)

Runtime task flow — steps 1–6 consistent; step 7 optional capability capture
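The seven steps can be sketched as one pipeline. The callable signatures and the `new_pattern` flag are illustrative assumptions; the shape to notice is that steps 2 through 6 always run, while step 7 is conditional.

```python
from typing import Callable

def run_task(
    task: str,                                    # 1: user submits intent
    analyse: Callable[[str], str],
    select: Callable[[str], list[str]],
    assemble: Callable[[str, list[str]], dict],
    execute: Callable[[dict], dict],
    store: Callable[[dict], None],
    formalise: Callable[[dict], None],
) -> dict:
    """Steps 1-6 run on every path; step 7 fires only when a pattern emerged."""
    match = analyse(task)                 # 2: apply the core question
    assets = select(match)                # 3: identify relevant knowledge assets
    plan = assemble(task, assets)         # 4: compile the Execution Plan
    artifact = execute(plan)              # 5: models and tools act on the plan
    store(artifact)                       # 6: persist the typed artifact
    if artifact.get("new_pattern"):       # 7 (optional): close the learning loop
        formalise(artifact)
    return artifact
```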
System Primitives
Core Vocabulary

Six primitives form the complete vocabulary of the system; everything else is composed from them.

Task

User intent expressed as an input that triggers routing and execution.

Knowledge Assets

The canonical, versioned building blocks of the Knowledge Layer: Skills, Workflows, Context Packs, and Prompt Assets.

Execution Plan

The compiled structured plan produced by the Router, defining which assets to apply, which models to invoke, and what artifact outputs to expect.

Execution Layer

The models and tools that perform the actual task — Claude, OpenAI models, Gemini, automated workflows, and external APIs.

Artifacts

Structured persistent outputs produced by execution. The knowledge currency of the system. See Artifacts specification below.

Memory

Persistent storage with two backends: Notion (navigable knowledge) and Git (canonical versioned assets). Memory is cross-cutting infrastructure, not a fifth architecture layer.
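A minimal sketch of the dual-backend write path, with in-memory stand-ins for Notion and Git (class names and methods are illustrative, not real client APIs):

```python
class GitBackend:
    """Stand-in for the canonical, versioned store (a real Git repository)."""
    def __init__(self):
        self.commits: list[tuple[str, str]] = []
    def store(self, path: str, content: str) -> None:
        self.commits.append((path, content))

class NotionBackend:
    """Stand-in for the navigable knowledge view (a real Notion workspace)."""
    def __init__(self):
        self.pages: dict[str, str] = {}
    def store(self, path: str, content: str) -> None:
        self.pages[path] = content

class Memory:
    """Cross-cutting persistence: one write fans out to both backends."""
    def __init__(self, git: GitBackend, notion: NotionBackend):
        self.git, self.notion = git, notion
    def store(self, path: str, content: str) -> None:
        self.git.store(path, content)     # canonical, versioned copy
        self.notion.store(path, content)  # navigable copy
```

Treating Memory as a facade over both stores is one way to keep it infrastructure rather than a fifth layer: every layer writes through the same interface.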

System primitives — relationships and data flow
Artifacts
The Knowledge Currency

Artifacts are structured outputs produced by workflows and skills — persistent reasoning results that outlast the session that produced them. They are the knowledge currency of the system: not raw model outputs, but structured, typed documents that accumulate value across executions.

Artifacts serve three purposes: they enable reproducible task execution, they accumulate knowledge in the Memory layer, and they provide inputs for future workflows. Artifacts are produced in step 6 of the runtime flow — and optionally trigger step 7, capability capture, when a valuable new reasoning pattern is identified.

research.md · architecture.md · diagram.md · critique.md · summary.md · execution_plan.md
01
Enable reproducible execution

Each artifact documents reasoning state — tasks can be resumed, reviewed, or replicated from the artifact alone.

02
Accumulate knowledge

Artifacts are stored in Memory (Notion for navigation, Git for canonical versioning) where they inform future routing decisions.

03
Provide inputs for future workflows

Downstream workflows receive artifacts as structured inputs — not unstructured text, but typed reasoning results.
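One way to make "typed document" concrete is a small metadata header on each artifact file, so downstream workflows can consume it as structured input. The field names here are assumptions for illustration, not the system's actual artifact schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Artifact:
    """A typed, persistent reasoning result, not a raw model response."""
    kind: str            # e.g. "research", "critique", "execution_plan"
    title: str
    body: str
    produced_on: date

    def to_markdown(self) -> str:
        """Serialise with a metadata header: the header carries the type
        information that lets future workflows treat this as structured input."""
        return (
            "---\n"
            f"kind: {self.kind}\n"
            f"title: {self.title}\n"
            f"date: {self.produced_on.isoformat()}\n"
            "---\n\n"
            f"{self.body}\n"
        )
```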

Design Principles
Five Constraints

These five principles are not aspirations — they are design constraints that govern every architectural decision above.

Knowledge First

Every task is matched against existing knowledge assets before invoking a model. If a reusable skill or workflow applies, it takes precedence over raw model inference.

Canonical Storage

Skills, workflows, context packs, and prompt assets are stored as canonical, versioned artifacts in Git — authoritative, auditable, and reproducible.

Deterministic Control

Routing decisions are transparent and explainable. The Router selects assets and assembles an Execution Plan through logical criteria, not probabilistic inference.

Model Agnostic

Skills and assets are portable across model providers. Claude, OpenAI, and Gemini are interchangeable at the Execution Layer — knowledge structures do not depend on any single model.

Progressive Learning

Valuable reasoning patterns discovered during execution can be converted into reusable skills and workflows, continuously expanding the system's capability base.

Vision
North Star

myKungFu evolves toward a knowledge-driven AI operating system — one where every capability is formalised as a skill, execution is governed by the Router, models are interchangeable at the execution layer, and reasoning artifacts accumulate as persistent knowledge.

The trajectory is from prompting a model to operating a structured cognitive system — a shift that changes not just what the system can do, but how reliably and reproducibly it does it.

Note — 'evolves toward' is the precise framing: this is a trajectory, not a current-state claim.

Capability growth — from first encounter to knowledge-driven execution
Deeper Documentation
In Development
Router Architecture & Enforcement Concept
In development

Full specification of routing logic, enforcement mechanisms, and the Router's role in knowledge governance.

Skill Specification
In development

Canonical format for defining, versioning, and publishing reusable skills in the Knowledge Layer.

myKungFu Capability Map
In development

Live inventory of current skills, workflows, context packs, and prompt assets — with coverage and quality indicators.

Implementation Roadmap
In development

Phased development plan: from current architecture toward the knowledge-driven AI operating system north star.