A blazing-fast Terminal User Interface (TUI) client for Ollama with real-time streaming, markdown support, and smart scrolling.
Built with performance and user experience in mind
Responses stream in live as they are generated, so you get immediate feedback instead of waiting for the full reply (see the streaming example below).
Automatic formatting for headers, lists, bold text, and syntax-highlighted code blocks with custom borders.
The view automatically follows new AI output, and a manual scroll lock (🔒) lets you read earlier messages undisturbed.
Switch between installed Ollama models using the arrow keys; separate input and output buffers are kept per model.
Each LLM maintains its own chat history, input text, and scroll position for seamless multi-model workflows.
Every chat session is automatically saved to text files, with both a combined transcript and per-model histories.
Familiar keyboard shortcuts for efficient navigation and control, designed for power users.
Ultra-low latency and a minimal resource footprint, thanks to Rust and the Ratatui terminal framework.
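
Under the hood, LazyLlama is a client for Ollama's local HTTP API. As a rough sketch of what the streaming described above looks like at the API level (this is Ollama's endpoint, not LazyLlama's own code, and "llama3" is just an example model name):

# Ollama streams back newline-delimited JSON chunks, each carrying a
# partial "response" field that a client can render as it arrives
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": true}'

LazyLlama consumes this stream and renders each chunk as it arrives, which is why output appears live in the terminal.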
Get up and running in minutes
# From crates.io (recommended)
cargo install lazyllama
# Or build from source
git clone https://github.com/Pommersche92/lazyllama.git
cd lazyllama
cargo install --path .
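# Run it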
lazyllama
That's it! Start chatting with your AI models right in your terminal.
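
One prerequisite: LazyLlama is a client for a locally running Ollama server, so make sure Ollama is installed, running, and has at least one model pulled ("llama3" below is only an example):

# Start the Ollama server if it isn't already running
ollama serve
# Download a model to chat with
ollama pull llama3
# See the installed models LazyLlama can switch between
ollama list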
Designed for efficiency and productivity
Everything you need to get the most out of LazyLlama