# Getting Started

Install and configure OpenClaw Cortex in about 5 minutes.
OpenClaw Cortex is a hybrid semantic memory system for AI agents. It stores memories as both structured metadata and high-dimensional vectors in Memgraph, then retrieves them using a multi-factor scoring algorithm that combines semantic similarity, recency, frequency, type priority, project scope, and graph-traversal signals.
## Prerequisites
Before you begin, make sure you have:
- Docker (for Memgraph)
- Ollama installed and running
- An Anthropic API key (for `capture`; not required for `store`/`recall`/`search`)
## Step 1: Install the binary
Install via the install script:
```bash
curl -fsSL https://raw.githubusercontent.com/ajitpratap0/openclaw-cortex/main/scripts/install.sh | bash
```

Verify the installation:

```bash
openclaw-cortex --version
```

Or build from source:
```bash
git clone https://github.com/ajitpratap0/openclaw-cortex
cd openclaw-cortex
go build -o bin/openclaw-cortex ./cmd/openclaw-cortex
export PATH="$PWD/bin:$PATH"
```

## Step 2: Start Memgraph
Use the provided `docker-compose.yml`:
```bash
docker compose up -d
```

Memgraph will be available at:

- Bolt: `bolt://localhost:7687` (used by openclaw-cortex)
- HTTP Lab UI: `http://localhost:7444`
## Step 3: Pull the embedding model
```bash
ollama pull nomic-embed-text
```

## Step 4: Store your first memory
```bash
openclaw-cortex store "Always run tests before merging to main" \
  --type rule \
  --scope permanent \
  --tags ci,testing
```

## Step 5: Recall memories
```bash
openclaw-cortex recall "What are the testing requirements?"
```

You should see the memory from Step 4 returned with a relevance score.
## Step 6: Capture from a conversation
For automatic memory extraction from conversations, set your API key:
```bash
export ANTHROPIC_API_KEY=sk-ant-...
```

Then capture a conversation turn:
```bash
openclaw-cortex capture \
  --user "How should I handle errors in Go?" \
  --assistant "Always return errors explicitly. Use fmt.Errorf with %w to wrap them for unwrapping. Never use panic for expected error conditions."
```

This sends the conversation turn to Claude Haiku, which extracts structured memories, named entities, and relationship facts, then stores them in Memgraph automatically.
## Step 7: Wire up Claude Code hooks
To get automatic memory injection in every Claude Code conversation, add the hook configuration to `.claude/settings.json` in your project:
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "openclaw-cortex hook pre"
          }
        ]
      }
    ],
    "Stop": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "openclaw-cortex hook post"
          }
        ]
      }
    ]
  }
}
```

Or use the installer:
```bash
openclaw-cortex hook install --project my-project
```

## Verify everything works
```bash
# Health check (verifies Memgraph, Ollama, Claude LLM)
openclaw-cortex health

# Check stats
openclaw-cortex stats

# Search memories
openclaw-cortex search "error handling"
```

## Configuration
The default configuration works out of the box if Memgraph and Ollama are running locally. To customize, create `~/.openclaw-cortex/config.yaml`:
```yaml
memgraph:
  uri: bolt://localhost:7687
  username: ""
  password: ""
ollama:
  base_url: http://localhost:11434
  model: nomic-embed-text
memory:
  dedup_threshold: 0.92
  default_ttl_hours: 720
```

### OpenClaw Gateway
If you use the OpenClaw Max plan, set `claude.gateway_url` and `claude.gateway_token` in your config instead of an Anthropic API key.
## Next Steps
- Read the Configuration reference
- Explore the CLI reference
- Learn about the Architecture
- Set up Claude Code Hooks for automatic memory