Brainlet

How it works

From raw code to project intelligence

Brainlet turns a codebase into a computed understanding layer that any LLM can query.

  1. Parse: Code graph

  2. Analyze: Relationships

  3. Train: Project signals

  4. Build: Intelligence

  5. Query: 8 tools

  6. Understand: LLM-ready

Parse

Brainlet scans the codebase, parses files, and extracts definitions, imports, calls, configuration, and relationships across 25 programming languages.
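The kind of signals this step collects can be sketched in Python with the standard ast module. This is a minimal illustration of extraction for a single Python file, not Brainlet's actual multi-language parser:

```python
import ast

def extract_signals(source: str) -> dict:
    """Collect definitions, imports, and call names from one Python file."""
    tree = ast.parse(source)
    defs, imports, calls = [], [], []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            defs.append(node.name)
        elif isinstance(node, ast.Import):
            imports.extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            imports.append(node.module)
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            calls.append(node.func.id)  # direct calls only; attribute calls skipped

    return {"defs": defs, "imports": imports, "calls": calls}

sample = "import os\n\ndef main():\n    print(os.getcwd())\n"
print(extract_signals(sample))
# → {'defs': ['main'], 'imports': ['os'], 'calls': ['print']}
```

A production parser would also resolve attribute calls, configuration files, and cross-file references; the point here is only the shape of the extracted signals.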

Analyze

The engine normalizes those signals into a project graph: modules, dependencies, boundaries, conventions, data flow, and impact paths.
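One concrete piece of such a graph is the impact path: invert the import edges, then walk the inverted edges to find every module a change can reach. A minimal sketch with invented module names:

```python
from collections import defaultdict

# Hypothetical import edges: module -> modules it imports.
imports = {
    "api": ["services", "models"],
    "services": ["models", "db"],
    "models": ["db"],
    "db": [],
}

# Invert the edges: a change to a module impacts everything that depends on it.
dependents = defaultdict(set)
for mod, deps in imports.items():
    for dep in deps:
        dependents[dep].add(mod)

def impact(module: str) -> set:
    """Transitively collect every module affected by a change to `module`."""
    affected, stack = set(), [module]
    while stack:
        for parent in dependents[stack.pop()]:
            if parent not in affected:
                affected.add(parent)
                stack.append(parent)
    return affected

print(sorted(impact("db")))  # → ['api', 'models', 'services']
```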

Train

Specialized analysis models learn project-specific structural signals and representations locally. This is not LLM fine-tuning; it is project intelligence built from the codebase itself.
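As a loose analogy for project-specific representations, the toy sketch below turns identifier usage into per-module vectors and compares them. The actual models Brainlet trains are not described in this document, and the module contents here are invented:

```python
import math
from collections import Counter

# Invented identifier bags per module; stand-ins for parsed signals.
modules = {
    "auth": ["login", "token", "verify", "user"],
    "billing": ["invoice", "charge", "user", "token"],
    "ui": ["render", "button", "click"],
}

def tfidf_vectors(docs: dict) -> dict:
    """Weight each identifier by how specific it is to a module (TF-IDF)."""
    df = Counter(t for terms in docs.values() for t in set(terms))
    n = len(docs)
    return {
        name: {t: c / len(terms) * math.log(n / df[t])
               for t, c in Counter(terms).items()}
        for name, terms in docs.items()
    }

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    na, nb = norm(a), norm(b)
    return dot / (na * nb) if na and nb else 0.0

vecs = tfidf_vectors(modules)
print(f"auth~billing: {cosine(vecs['auth'], vecs['billing']):.3f}")
print(f"auth~ui:      {cosine(vecs['auth'], vecs['ui']):.3f}")
```

Even this crude measure places auth closer to billing (shared token/user vocabulary) than to ui; learned representations refine the same idea.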

Build

Brainlet combines the graph, embeddings, and learned representations into an intelligence layer that knows how the project is structured and how changes propagate.
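A minimal sketch of merging structural edges and semantic vectors into one queryable layer. The class name, threshold, and data are illustrative assumptions, not Brainlet's architecture:

```python
def dot(a: list, b: list) -> float:
    return sum(x * y for x, y in zip(a, b))

class IntelligenceLayer:
    """Answer 'what is related to this module?' from two signal sources."""

    def __init__(self, graph: dict, embeddings: dict):
        self.graph = graph            # structural edges between modules
        self.embeddings = embeddings  # semantic vectors per module

    def related(self, module: str, threshold: float = 0.5) -> set:
        structural = set(self.graph.get(module, []))
        me = self.embeddings[module]
        semantic = {m for m, v in self.embeddings.items()
                    if m != module and dot(me, v) > threshold}
        return structural | semantic

layer = IntelligenceLayer(
    graph={"api": ["services"]},
    embeddings={"api": [1.0, 0.0], "services": [0.9, 0.1], "docs": [0.0, 1.0]},
)
print(sorted(layer.related("api")))  # → ['services']
```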

Query

Any LLM can query that intelligence through 8 specialized tools. Each tool answers a different class of project question from a different angle.
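The tool interface can be pictured as a registry mapping tool names to question-answering functions that an LLM invokes by name. The two tool names and canned answers below are hypothetical stand-ins; this document does not list Brainlet's eight tools:

```python
TOOLS = {}

def tool(fn):
    """Register a function as a queryable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def impact_of(module: str) -> str:
    # A real implementation would walk the project graph.
    return f"Changing {module} affects: services, models, api"

@tool
def conventions(area: str) -> str:
    # A real implementation would report learned project conventions.
    return f"{area} modules follow the repository's existing naming patterns"

# An LLM issues a tool call by name; the engine dispatches and answers.
print(TOOLS["impact_of"]("db"))
print(sorted(TOOLS))
```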

Understand

The LLM receives computed project knowledge instead of raw file chunks, so it can answer, review, or generate code with the project context already prepared.

What makes it different

Retrieval asks the LLM to do the hard work. Brainlet does that work before the prompt is ever built.

Traditional approach

  • Parse code into chunks
  • Create generic embeddings and indexes
  • Search for relevant files
  • Paste files into LLM context
  • Ask the model to infer the system
  • Accuracy scales with model cost
  • Often depends on cloud infrastructure

Brainlet approach

  • Parse the entire project
  • Learn project structure through analysis models
  • Combine graph signals, embeddings, and learned representations
  • Build computed understanding
  • Serve intelligence on demand
  • LLM gets what it needs, does its job
  • Accuracy scales with context quality
  • Works with open-source, mid-tier, or frontier models
  • Runs locally

From RAG to CAG: A New Architecture for Code Intelligence

Retrieval Augmented Generation (RAG) retrieves external information and adds it to the context. For code, this means searching for relevant files and pasting them alongside the question.

The problem: retrieval is not understanding. Finding the right files doesn't mean the model understands the project. It still has to figure out how components connect, what conventions exist, how changes propagate — from raw file contents alone.

Cognitive Augmented Generation (CAG) solves this at the source. Instead of retrieving files and asking the model to reconstruct the system, Brainlet learns the project's structure through specialized analysis models and serves computed intelligence directly to the LLM.

The LLM doesn't receive files to interpret. It receives knowledge to act on.
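The difference shows up in the shape of the prompt itself. In this sketch (file contents and facts are invented for illustration), RAG ships raw source for the model to interpret, while CAG ships computed conclusions:

```python
# Invented file contents standing in for retrieved source code.
files = {"db.py": "def connect():\n    ...  # connection logic\n" * 30}

question = "What breaks if db.connect changes?"

# RAG: paste raw source into the prompt; the model reconstructs the system.
rag_prompt = question + "\n\n" + files["db.py"]

# CAG: serve computed conclusions; the model acts on them directly.
facts = [
    "db.connect is called by services.start and models.init",
    "changes to db propagate to services, models, and api",
]
cag_prompt = question + "\n\n" + "\n".join(facts)

print(len(rag_prompt), len(cag_prompt))  # the CAG prompt is smaller and pre-digested
```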

Aspect                 | RAG                                           | CAG (Brainlet)
-----------------------|-----------------------------------------------|------------------------------------------------------
Input to LLM           | Retrieved file contents                       | Computed project intelligence
How context is found   | Search and retrieval                          | Learned project understanding
Accuracy depends on    | Model reasoning over retrieved context        | Context quality and engine knowledge
Cost structure         | Model cost grows with retries and prompt size | Local intelligence improves context before the prompt
Project understanding  | Model reconstructs from retrieved context     | Engine maintains a learned project model
Scales with            | Model cost and retrieval breadth              | Better context

CAG doesn't replace the LLM. It can make an LLM more effective by solving part of the context problem before the model starts working.

Better context in, better results out.