---
title: Embeddings & Vectors
---
import { Tabs, TabItem, Aside } from '@astrojs/starlight/components';
Cortex uses vector embeddings to power its Semantic Search and Intelligent Context Routing features.
## Local-First with Ollama
By default, Cortex attempts to use Ollama for local embeddings. This ensures your data never leaves your machine.
### Requirements
- Ollama installed and running.
- The `nomic-embed-text` model pulled:

  ```bash
  ollama pull nomic-embed-text
  ```
### Configuration
Cortex automatically detects Ollama if it's running on `http://localhost:11434`.
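To confirm the endpoint is reachable and producing vectors, you can call Ollama's embeddings API directly. The sketch below uses Ollama's public `/api/embeddings` endpoint; the `embedText` helper is illustrative and not part of Cortex.

```ts
// Minimal sketch: request an embedding from a local Ollama instance.
// The /api/embeddings endpoint and request shape are Ollama's public API;
// the helper name `embedText` is illustrative, not part of Cortex.
async function embedText(text: string): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const { embedding } = (await res.json()) as { embedding: number[] };
  return embedding;
}

const vector = await embedText("hello world");
console.log(vector.length); // nomic-embed-text produces 768-dimensional vectors
```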
## Cloud Fallback with OpenAI
If local embeddings are unavailable or too slow, you can fall back to OpenAI.
### Configuration
Set the `OPENAI_API_KEY` environment variable in your shell or `.env` file:
```bash
export OPENAI_API_KEY="sk-..."
```
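With the key set, a fallback request looks roughly like the following. This sketch calls OpenAI's public `/v1/embeddings` endpoint directly; the choice of `text-embedding-3-small` is an assumption, since the model Cortex uses for its fallback isn't specified here.

```ts
// Sketch of a cloud-fallback embedding call via OpenAI's public
// /v1/embeddings endpoint. The model name text-embedding-3-small is
// an assumption; Cortex's actual fallback model isn't documented here.
async function embedWithOpenAI(text: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "text-embedding-3-small", input: text }),
  });
  if (!res.ok) throw new Error(`OpenAI returned ${res.status}`);
  const data = (await res.json()) as { data: { embedding: number[] }[] };
  return data.data[0].embedding;
}
```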
## Advanced: SQLite-Vec

Cortex is moving towards native, high-performance vector storage using `sqlite-vec`.
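For a sense of what that native storage could look like, here is a sketch of `sqlite-vec`'s documented `vec0` virtual-table API, loaded from Node via `better-sqlite3`. The table and column names are hypothetical and do not reflect Cortex's internal schema.

```ts
import Database from "better-sqlite3";
import * as sqliteVec from "sqlite-vec";

// Hypothetical sketch of sqlite-vec usage; the table and column names
// are illustrative, not Cortex's internal schema.
const db = new Database(":memory:");
sqliteVec.load(db); // load the sqlite-vec extension into this connection

// vec0 virtual tables store fixed-dimension float vectors.
db.exec("CREATE VIRTUAL TABLE notes USING vec0(embedding float[768])");

// Vectors can be bound as JSON-encoded arrays.
const insert = db.prepare("INSERT INTO notes(rowid, embedding) VALUES (?, ?)");
insert.run(1, JSON.stringify(Array(768).fill(0.1)));

// KNN query: MATCH with a k constraint returns the nearest rows by distance.
const rows = db
  .prepare(
    `SELECT rowid, distance
       FROM notes
      WHERE embedding MATCH ?
        AND k = 3
      ORDER BY distance`
  )
  .all(JSON.stringify(Array(768).fill(0.1)));
console.log(rows);
```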
### Current Status
In v0.8.0, semantic search uses a hybrid approach:
- Keyword search via FTS5 for precision.
- In-memory cosine similarity for semantic relevance (see the sketch below).
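The in-memory half of that hybrid is essentially a cosine-similarity ranking over stored vectors. Here is a minimal sketch, assuming embeddings are kept in memory as `number[]`; the `Doc` type and `scoreByCosine` helper are illustrative, not Cortex internals.

```ts
// Minimal sketch of in-memory cosine-similarity ranking.
// The names (Doc, scoreByCosine) are illustrative, not Cortex internals.
interface Doc {
  id: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank candidate documents by similarity to a query embedding,
// highest-scoring first.
function scoreByCosine(query: number[], docs: Doc[]): Doc[] {
  return [...docs].sort(
    (x, y) =>
      cosineSimilarity(query, y.embedding) -
      cosineSimilarity(query, x.embedding)
  );
}
```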