How it works
Brainlet turns a codebase into a computed understanding layer that any LLM can query.
1. Parse: Code graph
2. Analyze: Relationships
3. Train: Project signals
4. Build: Intelligence
5. Query: 8 tools
6. Understand: LLM-ready
Brainlet scans the codebase, parses files, and extracts definitions, imports, calls, configuration, and relationships across 25 programming languages.
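As an illustration of the kind of signals a parse step extracts, here is a minimal sketch using Python's `ast` module on a toy source string. It is not Brainlet's parser; the function and source are invented for illustration.

```python
import ast

# Toy module to parse; contents are invented for this sketch.
SOURCE = """
import os
from json import dumps

def save(path, data):
    with open(path, "w") as f:
        f.write(dumps(data))
"""

def extract_signals(source: str) -> dict:
    """Walk a Python module and collect the kinds of facts a parse
    step extracts: definitions, imports, and call sites."""
    tree = ast.parse(source)
    signals = {"definitions": [], "imports": [], "calls": []}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            signals["definitions"].append(node.name)
        elif isinstance(node, ast.Import):
            signals["imports"].extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom):
            signals["imports"].append(node.module)
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            signals["calls"].append(node.func.id)
    return signals

print(extract_signals(SOURCE))
```

A real multi-language engine would use per-language grammars rather than a single stdlib parser, but the extracted shape (names, imports, calls) is the same idea.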
The engine normalizes those signals into a project graph: modules, dependencies, boundaries, conventions, data flow, and impact paths.
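The impact-path part of that graph can be pictured as a traversal problem. A minimal sketch, assuming a toy dependency map (module names invented): reverse the edges, then walk them transitively to find everything a change can reach.

```python
from collections import deque

# Hypothetical module dependency edges: each module lists what it imports.
# A real project graph would also carry boundaries, conventions, and data flow.
DEPENDS_ON = {
    "api": ["services", "models"],
    "services": ["models", "db"],
    "models": [],
    "db": ["config"],
    "config": [],
}

def impact_set(graph: dict, changed: str) -> set:
    """Modules that can be affected when `changed` changes:
    walk the reversed dependency edges transitively (BFS)."""
    dependents = {}
    for mod, deps in graph.items():
        for dep in deps:
            dependents.setdefault(dep, []).append(mod)
    seen, queue = set(), deque([changed])
    while queue:
        mod = queue.popleft()
        for downstream in dependents.get(mod, []):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return seen

# A change to "config" propagates through db and services up to api.
print(impact_set(DEPENDS_ON, "config"))
```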
Specialized analysis models learn project-specific structural signals and representations locally. This is not LLM fine-tuning; it is project intelligence built from the codebase itself.
Brainlet combines the graph, embeddings, and learned representations into an intelligence layer that knows how the project is structured and how changes propagate.
Any LLM can query that intelligence through 8 specialized tools. Each tool answers a different class of project question from a different angle.
The LLM receives computed project knowledge instead of raw file chunks, so it can answer questions, review changes, or generate code with the project context already prepared.
Retrieval asks the LLM to do the hard work. Brainlet does that work before the prompt is ever built.
Retrieval Augmented Generation (RAG) retrieves external information and adds it to the context. For code, this means searching for relevant files and pasting them alongside the question.
The problem: retrieval is not understanding. Finding the right files doesn't mean the model understands the project. It still has to figure out how components connect, what conventions exist, how changes propagate — from raw file contents alone.
Cognitive Augmented Generation (CAG) solves this at the source. Instead of retrieving files and asking the model to reconstruct the system, Brainlet learns the project's structure through specialized analysis models and serves computed intelligence directly to the LLM.
The LLM doesn't receive files to interpret. It receives knowledge to act on.
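The contrast can be made concrete with a toy sketch. Both functions below are hypothetical, invented only to show what reaches the model in each approach: raw file contents under RAG, computed facts under CAG. Neither reflects Brainlet's actual API.

```python
def rag_context(question: str, files: dict) -> str:
    # RAG: search for relevant files and paste their raw contents.
    # The model must reconstruct structure from this text itself.
    relevant = [body for name, body in files.items() if "payment" in name]
    return "\n\n".join(relevant) + "\n\nQuestion: " + question

def cag_context(question: str, facts: list) -> str:
    # CAG: serve precomputed structural facts, so the model
    # does not have to rediscover them from raw files.
    return "\n".join("- " + fact for fact in facts) + "\n\nQuestion: " + question

# Invented example data for the sketch.
files = {"payment_service.py": "class PaymentService: ...", "readme.md": "# docs"}
facts = [
    "PaymentService is called by CheckoutController and BillingJob.",
    "Changing its charge() signature impacts 2 modules.",
]
question = "Can I rename charge() safely?"

print(rag_context(question, files))
print(cag_context(question, facts))
```

The difference is in the prompt's content, not its length: the RAG prompt carries source text to interpret, while the CAG prompt carries conclusions the model can act on directly.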
| Aspect | RAG | CAG (Brainlet) |
|---|---|---|
| Input to LLM | Retrieved file contents | Computed project intelligence |
| How context is found | Search and retrieval | Learned project understanding |
| Accuracy depends on | Model reasoning over retrieved context | Context quality and engine knowledge |
| Cost structure | Model cost grows with retries and prompt size | Local intelligence improves context before the prompt |
| Project understanding | Model reconstructs from retrieved context | Engine maintains a learned project model |
| Scales with | Model cost and retrieval breadth | Context quality |
CAG doesn't replace the LLM. It can make an LLM more effective by solving part of the context problem before the model starts working.
Better context in, better results out.