LYRN is a local‑first cognitive operating system built on snapshot, delta, verbatim memory and topic indexes.
LYRN maintains state, memory and a self‑model. Unlike traditional LLM wrappers, it does not rely solely on transient context windows.
It is deterministic, local‑first and inspectable, ensuring that every decision made by the agent can be traced back to a specific memory delta or rule set.
AI today is often stateless, cloud‑bound, hallucinatory and brittle. Deploying agents into critical infrastructure is risky when their internal logic is opaque and their memory is fleeting.
Automated logistics and factory floor coordination with persistent state tracking.
Customer service interfaces that remember returning users and context history.
Private, air‑gapped monitoring agents that detect anomalies without cloud leaks.
High‑latency, low‑bandwidth habitat controllers for autonomous operation.
The core of LYRN is the continuous loop of Snapshot → Delta → Reflection → Update → Verbatim Memory.
Inputs are processed not just as text, but as events that alter the system’s internal state database. This allows the system to remember why it made a decision days or weeks later.
[ Sensors / Inputs ]
↓
[ Snapshot Builder ]
↓
[ Delta Manager ]
↓
[ LYRN Core Loop ]
↓
[ Actions / Decisions ]
↓
[ Verbatim Memory + Topic Index Updates ]
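The Snapshot → Delta → Update → Verbatim Memory stages above can be sketched in a few lines of Python. The names here (`LyrnState`, `build_snapshot`, `apply_delta`) are illustrative only, not LYRN's actual API:

```python
import time
from dataclasses import dataclass, field

@dataclass
class LyrnState:
    """In-memory model of the agent's current world state (hypothetical)."""
    facts: dict = field(default_factory=dict)
    verbatim_log: list = field(default_factory=list)

def build_snapshot(state: LyrnState) -> dict:
    """Freeze the current state into an immutable snapshot."""
    return {"timestamp": time.time(), "facts": dict(state.facts)}

def apply_delta(state: LyrnState, delta: dict) -> None:
    """A delta is a small, explicit state change; applying it is deterministic."""
    state.facts.update(delta)

def core_loop(state: LyrnState, events: list) -> None:
    for event in events:
        snapshot = build_snapshot(state)   # Snapshot
        apply_delta(state, event)          # Delta + Update
        # Verbatim Memory: log the raw event alongside the prior snapshot,
        # so the *reason* for each change can be replayed later.
        state.verbatim_log.append({"before": snapshot, "event": event})

state = LyrnState()
core_loop(state, [{"door": "open"}, {"door": "closed"}])
print(state.facts)              # {'door': 'closed'}
print(len(state.verbatim_log))  # 2
```

Because every change arrives as an explicit delta with its prior snapshot logged, any later decision can be traced back to the exact event that caused it.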
Embodiment means equipping LYRN with sensors, actuators and a body schema. The system builds a persistent internal model of its own body and environment.
Because there is no cloud dependency, embodied agents retain continuity even across reboots or periods of network isolation.
[ Sensors / Proprioception / Vision ]
↓
[ Snapshot Builder ]
↓
[ Delta Manager ]
↓
[ LYRN Core Cognitive Loop ]
↓
[ Embodied Action Layer ]
↓
[ Verbatim Memory + Body Schema Updates ]
A LYRN‑embodied agent receives the task to pick up a box. The Snapshot builds an instant picture of position, arm extension, object location and weight estimate.
As it moves, Delta events are fired: grip slippage, centre‑of‑mass shift or a new obstacle detected.
LYRN updates internal topics instantly: “object,” “grip,” “surface” and “movement vector.” Reflection evaluates movement safety and optimization in real‑time.
Once the agent completes the task, it logs the attempt in verbatim memory. The next attempt starts from these improved priors — no retraining pipeline needed.
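A minimal Python sketch of how delta events during the box-pick task could update topic indexes, assuming a simple per-topic append (the event fields and topics are taken from the walkthrough above, but the data structure is hypothetical):

```python
from collections import defaultdict

# Hypothetical delta events fired while the arm moves the box.
delta_events = [
    {"topic": "grip", "detail": "slippage detected, grip force raised"},
    {"topic": "movement vector", "detail": "centre of mass shifted left"},
    {"topic": "object", "detail": "new obstacle within path corridor"},
]

topic_index = defaultdict(list)

for event in delta_events:
    # Each delta is appended under its topic, so the next attempt at this
    # task starts from these logged observations (improved priors).
    topic_index[event["topic"]].append(event["detail"])

print(sorted(topic_index))  # ['grip', 'movement vector', 'object']
```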
Runs on standard embedded PCs inside robots, drones, forklifts or humanoids. Works with standard sensor packs including LIDAR, depth cameras, encoders, accelerometers and microphones. Embodied agents inherit bounded behaviour guarantees natively.
Every user input and AI response is captured as a plain text file, timestamped for precise chronological tracking. These logs are minimal by design.
For scalability, chat entries are grouped into blocks (typically 50 entries). Each block has an SQL‑based index file storing summaries and metadata.
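The block-grouping scheme can be sketched with Python's built-in `sqlite3`. The table layout and the `block_id` mapping are assumptions for illustration; only the block size of 50 comes from the text above:

```python
import sqlite3

BLOCK_SIZE = 50  # entries per block, as described above

def block_id(entry_number: int) -> int:
    """Map a chat entry number to its block (entries 0-49 -> block 0, ...)."""
    return entry_number // BLOCK_SIZE

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE block_index (
    block_id INTEGER PRIMARY KEY,
    first_entry INTEGER,
    last_entry INTEGER,
    summary TEXT)""")

# Index a block of entries with its summary metadata.
conn.execute("INSERT INTO block_index VALUES (?, ?, ?, ?)",
             (block_id(120), 100, 149, "Discussion of warehouse routing"))
conn.commit()

row = conn.execute(
    "SELECT summary FROM block_index WHERE block_id = ?",
    (block_id(120),)).fetchone()
print(row[0])  # Discussion of warehouse routing
```

Retrieval then only has to scan block summaries in SQL, opening the underlying plain-text files for a block when its summary matches.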
Memory retrieval is initiated either by a user query or by an LLM-internal trigger.
Modern AI systems, especially those built on large language models (LLMs), often operate in a purely reactive mode. They only reason when prompted by external input, which limits their ability to develop persistent goals or generate insight over time.
The Reflection Cycle in LYRN addresses this by introducing a background process that enables the system to self‑evaluate, consolidate memory and adapt its internal structures without user intervention.
In most LLM‑based systems, memory is fragmented. Without a mechanism for internal review, symbolic memory becomes bloated. The Reflection Cycle addresses these problems directly.
The Reflection Cycle operates on a timer or can be triggered manually.
The Reflection Cycle is deeply integrated with LYRN's modular memory system.
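One way to sketch the timer-plus-manual-trigger behaviour is a cooperative `tick()` called from the main loop. The class name, method names and interval are all hypothetical; only the trigger semantics come from the description above:

```python
import time

class ReflectionCycle:
    """Cooperative sketch: call tick() regularly; it reflects when the
    interval has elapsed or when manually triggered."""

    def __init__(self, interval_s: float, reflect):
        self.interval_s = interval_s
        self.reflect = reflect          # callable that consolidates memory
        self._last_run = time.monotonic()

    def tick(self, manual_trigger: bool = False) -> bool:
        now = time.monotonic()
        if manual_trigger or (now - self._last_run) >= self.interval_s:
            self._last_run = now
            self.reflect()              # e.g. prune, summarize, update indexes
            return True
        return False

runs = []
cycle = ReflectionCycle(interval_s=3600, reflect=lambda: runs.append("reflected"))
cycle.tick()                       # timer not yet elapsed: no reflection
cycle.tick(manual_trigger=True)    # manual trigger runs it immediately
print(runs)  # ['reflected']
```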
In memory‑based AI systems, context is everything. Traditional memory implementations rely on long token windows or rigid vector embeddings. This document outlines a lightweight, scalable approach built entirely from plain text: Topic Indexing.
The core concept is simple: each time a keyword is searched, the system pulls up a topic index or creates one. This becomes a central node containing summaries and insights, allowing the LLM to reason, recall and relate over time.
Topic indexes are automatically created from keyword searches. These indexes are updated with all chat entries that include the keyword, along with summaries.
Over time, each topic index grows richer, containing a chronological thread of thoughts. This is not static memory; it is lived memory.
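The search-driven index scheme above can be sketched directly: each keyword search pulls up (or creates) a topic node, then appends every matching chat entry. The dictionary layout is an assumption; the create-on-search behaviour is from the text:

```python
import time

topic_indexes = {}   # topic name -> index node

def search(keyword: str, chat_entries: list) -> dict:
    """On each keyword search, pull up the topic index or create one,
    then append every matching chat entry (a sketch of the scheme)."""
    index = topic_indexes.setdefault(
        keyword, {"created": time.time(), "entries": [], "insights": []})
    for entry in chat_entries:
        if keyword in entry.lower() and entry not in index["entries"]:
            index["entries"].append(entry)   # chronological thread
    return index

log = ["I keep having dreams about flying.",
       "Today was uneventful.",
       "Another flying dream last night; dreams feel vivid lately."]
idx = search("dreams", log)
print(len(idx["entries"]))  # 2
```

Repeating the search later with new log entries grows the same node, which is what makes the index a chronological thread rather than a one-off query result.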
The topic index framework allows the system to evolve its behaviour through a structured delta mechanism.
insight_id: 52
timestamp: 2025-06-18T03:20
summary: Recurring dream patterns align with recent emotional tone.
source_topic: dreams
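Applying a structured insight delta like the record above might look as follows in Python. The `apply_insight` helper and the index layout are hypothetical; the field names mirror the record shown:

```python
# The insight delta in the record format shown above.
insight = {
    "insight_id": 52,
    "timestamp": "2025-06-18T03:20",
    "summary": "Recurring dream patterns align with recent emotional tone.",
    "source_topic": "dreams",
}

topic_indexes = {"dreams": {"entries": [], "insights": []}}

def apply_insight(delta: dict, indexes: dict) -> None:
    """Attach the insight to its source topic, so behaviour evolves through
    explicit, inspectable deltas rather than opaque weight updates."""
    indexes[delta["source_topic"]]["insights"].append(
        {"id": delta["insight_id"], "summary": delta["summary"]})

apply_insight(insight, topic_indexes)
print(topic_indexes["dreams"]["insights"][0]["id"])  # 52
```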
The live RWI on the dashboard tracks these core components inside the master prompt:
###RWI_INSTRUCTIONS_START###
This is the Relational Web Index (RWI). It provides the LLM with a list of active components and how to interpret them. Each line represents an active component and its associated brackets and purpose.
**You must read through the following components listed in the RWI before each response.**
- system_instructions: [###SYSTEM_INSTRUCTIONS.START###]...[###SYSTEM_INSTRUCTIONS.END###] Core system instructions.
- system_rules: [###SYSTEM_RULES.START###]...[###SYSTEM_RULES.END###] Hard constraints on behavior.
- ai_preferences: [###AI_PREFERENCES_START###]...[###AI_PREFERENCES_END###] AI-specific preferences.
- personality: [###PERSONALITY.START###]...[###PERSONALITY.END###] AI identity traits and behavioral biases.
- jobs_instructions: [###JI_START###]...[###JI_END###] This block contains a list of job instructions.
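A tool inspecting the master prompt could parse RWI lines back into a component table. This is a sketch against the line format shown above, not LYRN's own parser; the regex assumes the `- name: [START]...[END] purpose` shape:

```python
import re

rwi_text = """- system_instructions: [###SYSTEM_INSTRUCTIONS.START###]...[###SYSTEM_INSTRUCTIONS.END###] Core system instructions.
- system_rules: [###SYSTEM_RULES.START###]...[###SYSTEM_RULES.END###] Hard constraints on behavior."""

# Each RWI line names a component, its bracket markers, and its purpose.
pattern = re.compile(r"^- (\w+): \[([^\]]+)\]\.{0,3}\[([^\]]+)\] (.+)$", re.M)

components = {
    name: {"start": start, "end": end, "purpose": purpose}
    for name, start, end, purpose in pattern.findall(rwi_text)
}
print(sorted(components))  # ['system_instructions', 'system_rules']
```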
LYRN acts as the “brain” for industrial agents (forklifts, AGVs, robots). Unlike traditional centralized cloud systems, LYRN uses coordinate‑based navigation, grid layouts and QR anchors to build a persistent, local mental model of the environment. It stays fully local and stateful, allowing machines to make autonomous decisions even when connectivity is lost.
[ Warehouse Sensors / QR Codes ]
↓
[ Snapshot Builder ]
↓
[ Delta Manager ]
↓
[ LYRN Core Loop (KV Cache + Topic Index) ]
↓
[ Navigation Commands ] → [ Verbatim Logs ]
A forklift boots near Aisle 4, scans a QR anchor and loads the local grid snapshot. The central command issues a task: “Retrieve pallet 4B‑17.”
LYRN plans the path but detects a previously logged obstruction in Aisle 5 via its Verbatim Memory. It immediately calculates a re‑route through Aisle 6.
The agent moves safely, avoiding the hazard, and logs the completed task and route into the Delta Manager for future reflection cycles.
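The re-route decision in this scenario reduces to a lookup against locally logged obstructions before committing to a path. A minimal Python sketch, with a hypothetical log schema and aisle numbers taken from the scenario:

```python
# Hypothetical verbatim log: obstructions previously observed and recorded.
verbatim_log = [
    {"event": "obstruction", "aisle": 5, "time": "2025-06-17T14:02"},
]

def plan_route(preferred_aisle: int, fallback_aisle: int) -> int:
    """Check the local verbatim log before committing to a path."""
    blocked = any(e["event"] == "obstruction" and e["aisle"] == preferred_aisle
                  for e in verbatim_log)
    return fallback_aisle if blocked else preferred_aisle

print(plan_route(preferred_aisle=5, fallback_aisle=6))  # 6
```

Because the log lives on the forklift itself, this check works even when the central WMS is unreachable.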
LYRN can be embedded directly into forklift controllers or mounted as a separate cognitive appliance. It can run alongside existing WMS/ERP systems, serving as the localized cognition layer that bridges the gap between database logic and physical reality.
Most kiosks are stateless, generic interfaces that forget you the moment you walk away. LYRN changes this by allowing a kiosk to maintain an internal memory of returning visitors (using tokens like QR codes, device IDs or configured biometrics). It creates an experience of continuity and recognition while keeping all data local.
[ User Token / Input ]
↓
[ Identity Resolution (Local) ]
↓
[ LYRN Query: "Have we met?" ]
↓
[ Retrieve Verbatim History ] → [ Generate Response ]
You approach a kiosk at a modern art museum for the second time. The kiosk scans your ticket QR code.
Recognizing your token, LYRN accesses the Topic Index for your previous session.
It offers directions and remembers your previous accessibility setting (large text), creating a seamless continuation of your visit.
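The identity-resolution step can be sketched as a local token-hash lookup. Storing only a hash of the ticket QR token is an assumed privacy measure for this sketch, as are the function names and session schema:

```python
import hashlib

# Local-only store: token hash -> previous session context (hypothetical).
sessions = {}

def token_key(raw_token: str) -> str:
    """Store only a hash of the ticket QR token, never the token itself."""
    return hashlib.sha256(raw_token.encode()).hexdigest()

def greet(raw_token: str) -> str:
    key = token_key(raw_token)
    if key in sessions:
        prefs = sessions[key].get("accessibility", "default")
        return f"Welcome back! Restoring your settings: {prefs}."
    sessions[key] = {"accessibility": "default"}
    return "Welcome! First visit noted."

print(greet("ticket-8841"))  # Welcome! First visit noted.
sessions[token_key("ticket-8841")]["accessibility"] = "large text"
print(greet("ticket-8841"))  # Welcome back! Restoring your settings: large text.
```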
LYRN can be implemented on standard kiosk PCs. It works offline; synchronization with other kiosks is optional if the environment allows it, enabling a “hive mind” within the building without leaking data to the internet.
Many security systems are either “dumb” (simple rule‑based alerts) or opaque (cloud‑based ML black boxes). LYRN offers a third way: local, inspectable intelligence with long‑term memory. It sits in the secure zone, monitoring patterns over weeks or months without ever sending data out.
[ Badge/Sensor Input ]
↓
[ Delta Trigger ] → [ Anomaly Check ]
↓
[ LYRN Context Lookback ]
(Compare vs. User History)
↓
[ Risk Score Calculation ] → [ Alert/Log ]
Badge ID 742 attempts access to a restricted server room at 03:00.
LYRN recalls previous attempts and notes a complete lack of history for this user in this zone at this time.
It flags the event with a high severity score and generates a short incident narrative. Later, during a security review, the officer uses LYRN’s Verbatim Memory to replay the sequence of events in clear text, rather than parsing raw database logs.
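The context lookback from the diagram (compare the event against this badge's history in this zone and time window) can be sketched as a small scoring function. The scores, schema and two-hour window are illustrative assumptions:

```python
# Hypothetical access history: (badge_id, zone, hour of day).
history = [(742, "lobby", 9), (742, "lobby", 17), (101, "server_room", 3)]

def risk_score(badge_id: int, zone: str, hour: int) -> float:
    """Score rises when this badge has no history in this zone at this hour."""
    in_zone = [h for b, z, h in history if b == badge_id and z == zone]
    if not in_zone:
        return 0.9                      # never seen here: high risk
    nearby = [h for h in in_zone if abs(h - hour) <= 2]
    return 0.1 if nearby else 0.5       # seen here, but at an unusual time

# Badge 742 at the server room at 03:00: no history in this zone at all.
print(risk_score(742, "server_room", 3))  # 0.9
```

Because both the history and the scoring rule are plain local data, a reviewer can see exactly why an event was flagged rather than trusting an opaque model.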
LYRN runs on secure servers behind firewalls. It can integrate with existing access control systems (ACS) via local APIs or by consuming log streams, adding a layer of intelligence without replacing certified infrastructure.
Space operations suffer from extreme latency, blackouts and harsh conditions. You cannot depend on cloud computing or constant guidance from Earth. LYRN provides a local cognition engine that runs on satellites, probes and habitats, allowing them to reason about their state and mission objectives in real‑time.
[ Telemetry Stream ]
↓
[ Anomaly Detection ] → [ Delta Event ]
↓
[ LYRN Core: "Is this critical?" ]
↓
[ Autonomous Decision ] → [ Action ]
↓
[ Generate Summary for Earth Uplink ]
A probe enters a pre‑known communications blackout behind a planet.
During the blackout, a power dip occurs. LYRN assesses the situation against mission parameters, shuts down non‑critical scientific instruments to conserve energy and adjusts orientation.
It logs the decision logic in Verbatim Memory. When contact returns, instead of sending raw chaotic data, LYRN produces a concise narrative summary of the incident for mission control.
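Producing a concise narrative instead of raw telemetry might look like the following sketch, which ranks logged incidents by severity and compresses them into one uplink line. The record fields and ranking rule are assumptions:

```python
# Hypothetical incident records logged during the blackout.
incidents = [
    {"severity": 3, "action": "shut down spectrometer", "reason": "power dip"},
    {"severity": 1, "action": "adjusted orientation", "reason": "conserve energy"},
]

def uplink_summary(records: list, max_items: int = 2) -> str:
    """Compress verbatim logs into a short narrative, highest severity first,
    so scarce bandwidth carries decisions rather than raw telemetry."""
    ranked = sorted(records, key=lambda r: r["severity"], reverse=True)
    lines = [f"{r['action']} ({r['reason']})" for r in ranked[:max_items]]
    return "; ".join(lines)

print(uplink_summary(incidents))
# shut down spectrometer (power dip); adjusted orientation (conserve energy)
```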
LYRN is designed to run on constrained onboard compute. It works with intermittent uplinks by prioritizing high‑value “narrative” data and state summaries over raw telemetry dumps, optimizing bandwidth usage.