Notepatra v0.1.67 docs

Notepatra Documentation

A native C++/Rust code editor with local-first AI — six backends in the dropdown (Ollama, llama.cpp, Ollama Cloud, OpenRouter, OpenAI, Azure OpenAI), plus any OpenAI-compatible server (LM Studio, Jan, vLLM, KoboldCpp, llamafile, text-generation-webui) reachable via the llama.cpp entry. Fast, portable, free forever under GPL-3.0. This is the complete reference for installing, using, customizing, and extending Notepatra.

Introduction

Notepatra is a native code editor written in C++17 with a Rust core for the heavy lifting (memory-mapped file I/O, Aho-Corasick search, Myers diff, formatters). It targets Linux x64 / ARM64, macOS Apple Silicon, and Windows x64. The bare executable is ~9 MB stripped on each platform; downloads range from ~3.0 MB (Linux ARM64) to ~39.9 MB (Windows MSI with Qt DLLs bundled).

Unlike Electron-based editors, Notepatra does not ship a browser runtime. It uses Qt5 + QScintilla — a battle-tested Scintilla-based editor engine — so 226 file types, 92 language lexers, brace matching, folding, and code completion are built in.

Unlike cloud-first editors, Notepatra's AI is local-first. The AI dock dropdown ships six backends as of v0.1.55: Ollama (default · localhost:11434), llama.cpp's llama-server (loads any GGUF directly · localhost:8080), OpenRouter (cloud · 100+ models — Claude, GPT, Gemini, Grok, Kimi, …), Ollama Cloud (cloud · gpt-oss:120b, qwen3-coder:480b, deepseek-v3.1:671b), OpenAI direct, and Azure OpenAI (enterprise). To use any other OpenAI-compatible server (LM Studio, Jan, vLLM, KoboldCpp, llamafile, text-generation-webui), pick the llama.cpp entry and set the base URL in Settings → Preferences → AI. Nothing leaves your computer unless you pick a cloud backend. No mandatory API keys, no subscription, no telemetry.

The AI runs as an agent. Tick Coding Mode in the AI dock and the model gets 5 native tools — read_file, list_dir, search, write_file, apply_diff — plus three-layer path safety, a 25-call hard cap per turn, and persistent per-workspace chat history. See Coding Mode (agentic) below for the full surface.

What "0.1.x" means. Notepatra is built by a one-person team in the open and is not yet feature-complete relative to mature editors. The roadmap is deliberately public. Latest release is v0.1.72 (May 2026). See the FAQ for what's shipped vs. planned.

Install

Quick install (recommended)

Linux

curl -sL https://notepatra.org/install.sh | bash

Downloads the latest release for your architecture (x64 or ARM64), verifies SHA-256, installs to ~/.local/bin/notepatra, creates a .desktop entry, and registers the hicolor icon theme. Rerun to upgrade.

macOS (Apple Silicon only)

curl -sL https://notepatra.org/install.sh | bash

Same script. Downloads .dmg, mounts it, copies Notepatra.app to /Applications, unmounts. Intel Macs are not shipped pre-built — build from source.

Windows (PowerShell)

irm https://notepatra.org/install.ps1 | iex

Downloads the latest notepatra-setup-<version>.exe, runs the NSIS installer which registers in Settings → Apps → Installed apps, creates Start Menu + optional Desktop shortcut, optionally adds to PATH. Uninstall via Control Panel works.

Manual download

Every release is at github.com/singhpratech/notepatra/releases/latest.

Platform | Asset | Size
🐧 Linux x64 | notepatra-linux-x64.tar.gz | ~3.2 MB
🐧 Linux ARM64 | notepatra-linux-arm64.tar.gz | ~3.0 MB
🍎 macOS Apple Silicon | notepatra-macos-arm64.dmg | ~26.6 MB
🪟 Windows x64 (MSI) | notepatra-<version>.msi | ~39.9 MB
🪟 Windows x64 (installer) | notepatra-setup-<version>.exe | ~32.0 MB
🪟 Windows x64 (portable) | notepatra-windows-x64.zip | ~36.5 MB
Linux: Qt5 required. The Linux tarball is just the bare binary — Qt5 is expected to be installed system-wide. On Ubuntu/Debian: sudo apt install qtbase5-dev libqscintilla2-qt5-dev. On Fedora: sudo dnf install qt5-qtbase qscintilla-qt5. On Arch: sudo pacman -S qt5-base qscintilla-qt5. Mac/Windows downloads bundle Qt.

Uninstall

Notepatra installs cleanly and uninstalls cleanly. Pick your platform and the path you used to install. Every command below is copy-paste ready.

🛡 Privacy guarantee. Notepatra never sends telemetry. Uninstalling removes only the binary, shortcuts, and your local config — nothing was ever phoned home, so there is nothing to revoke. No accounts, no API keys, no cloud state, no analytics. If you used the AI Assistant, it connects to your local Ollama daemon; uninstalling Notepatra does not remove Ollama or any pulled models — manage those with the ollama CLI.

🍎 macOS (Sonoma · Sequoia · Tahoe)

One-liner

curl -fsSL https://notepatra.org/uninstall.sh | sh

Auto-detects macOS and removes the app, CLI symlink, and user data. Uses sudo if /Applications is not writable.

Manual (any install path)

# Remove the app (sudo only if /Applications is not writable)
sudo rm -rf /Applications/Notepatra.app
#   …or if you installed to ~/Applications instead:
rm -rf ~/Applications/Notepatra.app

# Remove the CLI symlink
rm -f ~/.local/bin/notepatra

# Remove user config + cache + saved state
rm -rf ~/.config/notepatra
rm -rf ~/Library/Preferences/com.notepatra.editor.plist
rm -rf ~/Library/Saved\ Application\ State/com.notepatra.editor.savedState
rm -rf ~/Library/Caches/com.notepatra.editor

What Notepatra does NOT create

🐧 Linux (Ubuntu · Debian · Fedora · Arch · any distro)

One-liner

curl -fsSL https://notepatra.org/uninstall.sh | sh

Manual (install.sh path)

# Remove the binary
rm -f ~/.local/bin/notepatra

# Remove the desktop entry
rm -f ~/.local/share/applications/notepatra.desktop

# Remove icons (all sizes)
for sz in 16 32 48 64 128 256; do
  rm -f ~/.local/share/icons/hicolor/${sz}x${sz}/apps/notepatra.png
done
gtk-update-icon-cache ~/.local/share/icons/hicolor 2>/dev/null

# Remove user config + cache + recovery
rm -rf ~/.config/notepatra

# Refresh the application menu so the launcher disappears
update-desktop-database ~/.local/share/applications 2>/dev/null

Manual (tarball extracted somewhere else)

# Find where you put it
which notepatra
# /opt/notepatra/notepatra      ← example

# Remove that directory
sudo rm -rf /opt/notepatra
rm -f ~/.local/share/applications/notepatra.desktop
rm -rf ~/.config/notepatra

If you built from source

cd /path/to/notepatra-build
sudo cmake --build . --target uninstall   # if you ran `make install`
# Or just delete the source tree:
cd .. && rm -rf notepatra

What is NOT touched

🪟 Windows 10 · 11

If you installed via the NSIS installer (notepatra-setup-X.Y.Z.exe)

Easiest: use Windows Settings.

  1. Open Settings (Win+I)
  2. Apps → Installed apps
  3. Search for Notepatra
  4. Three-dot menu → Uninstall
  5. The NSIS uninstaller runs → Next → Uninstall

Alternative: run the uninstaller directly from %LOCALAPPDATA%\Notepatra\uninstall.exe.

The uninstaller automatically removes:

One-liner (PowerShell)

irm https://notepatra.org/uninstall.ps1 | iex

Manual (PowerShell)

# Remove install directory (Qt DLLs + .exe + uninstall.ps1)
Remove-Item -Recurse -Force "$env:LOCALAPPDATA\Notepatra"

# Remove Start Menu + Desktop shortcuts
Remove-Item -Force "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\Notepatra.lnk"
Remove-Item -Force "$env:USERPROFILE\Desktop\Notepatra.lnk"

# Remove from user PATH
$p = [Environment]::GetEnvironmentVariable("PATH", "User") -split ';' |
     Where-Object { $_ -notlike "*Notepatra*" } | Where-Object { $_ }
[Environment]::SetEnvironmentVariable("PATH", ($p -join ';'), "User")

# Remove the Installed Apps registry entry
Remove-Item -Recurse -Force "HKCU:\Software\Microsoft\Windows\CurrentVersion\Uninstall\Notepatra"

# Remove user config + saved state
Remove-Item -Recurse -Force "$env:USERPROFILE\.config\notepatra"
Remove-Item -Recurse -Force "$env:APPDATA\Notepatra"

Portable zip

If you used the portable notepatra-windows-x64.zip, no installer ever ran — just delete the folder you extracted it to.

Remove-Item -Recurse -Force C:\path\to\notepatra-folder

What is NOT touched

First launch

Launch Notepatra from your start menu / Launchpad / notepatra command. You'll see an empty editor with the menu bar, toolbar, tab bar, status bar, and (collapsed) side panels.

The first time you run Notepatra, it creates ~/.config/notepatra/ (Linux/macOS) or %LOCALAPPDATA%\Notepatra\ (Windows) and writes a default config.json. Themes, fonts, tab width, word wrap, etc. are persisted there.

Lite mode vs Full mode (v0.1.64+)

Starting in v0.1.64, Notepatra ships two build flavors per release. Lite is the default download — a bare ~9 MB binary with no heavy dependencies bundled. Full is the opt-in flavor that includes QtWebEngine for inline Vega-Lite chart rendering (≈ 95 MB extra). Heavy features go through packs that you install on demand, the way DuckDB ships a tiny core plus optional extensions (httpfs, parquet, spatial, etc.).

Why this split?

v0.1.63 bundled QtWebEngine into every release for one specific feature — the AI Data Analyst's inline chart card. Most users will never trigger a chart, but every install paid the disk + download cost. v0.1.64 flipped the default: lite by default, with charts moved to an optional pack. The same architecture mirrors how Cursor / VS Code / Sublime add language servers and renderers — small core, install what you need.

Flavor | Bare binary | What you get | What's missing
Lite (default) | ~9 MB stripped | Everything: editor, lexers, AI Assistant, Coding Mode, Data Analyst tool calls, Git, REST client, hex editor, plugins, themes. | Inline Vega-Lite chart rendering. The Data Analyst can still generate chart specs — they show as a "Charts Pack required" card with [View JSON] to copy the spec out.
Full (opt-in) | ~95 MB bundled | Everything in Lite, plus the QtWebEngine-backed chart renderer that paints Vega-Lite specs inline in the chat transcript.

Packs — what's installed where

The plugin loader (src/plugin_loader.{h,cpp}) tracks which packs are available. Each pack lives under your per-user plugin directory:

v0.1.66 ships with the charts pack bundled into the Full flavor (it's the same binary — the pack is "installed" if WebEngine was linked at compile time). v0.1.66+ adds in-app download / SHA-256 verification / runtime QPluginLoader activation so packs can be added without swapping binaries.

Pack ID | Size | What it does | Status
charts | ≈ 95 MB | Renders Vega-Lite charts (bar / line / scatter / area / composite) inline in the chat transcript. Powered by QtWebEngine + vega-embed. | Bundled in the Full flavor as of v0.1.66 (Linux x64 / Linux ARM64). macOS / Windows Full builds land in v0.1.66.
pdf | ≈ 12 MB | Rasterises PDFs to PNG pages so vision-capable AI models can read them. Powered by Poppler-Qt5. | Planned for v0.1.67. Until then, dragging a PDF into the AI dock shows the "use a vision-capable model" error bubble.

Upgrading from Lite to Full

Three paths, pick whichever matches your situation:

Path A — Click [Install charts pack] in the app

When you ask the AI Assistant to chart something in Data Analyst mode, the lite binary shows the 📊 Chart rendering requires the Charts Pack card with two buttons:

Path B — Download the Full flavor manually

On the GitHub Releases page pick the file with -full in the name:

Path C — Build from source with -DNOTEPATRA_WITH_WEBENGINE=ON

Works on any platform with QtWebEngine dev headers available:

Then:

git clone https://github.com/singhpratech/notepatra.git
cd notepatra
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DNOTEPATRA_WITH_WEBENGINE=ON
make -j$(nproc)
./notepatra

The binary will report a normal version string but the chart renderer's WebEngine code path is now compiled in, so the Data Analyst's chart cards render inline instead of showing the install prompt.


UI overview

Editor basics

The editor is QScintilla 2.14.1 (official Riverbank source) under the hood: folding, brace matching, auto-indent, bookmarks, multi-cursor (hold Alt), rectangular selection, code completion, and snippets are all built in.

Opening & saving

File-size limits

Notepatra uses the Rust core's memory-mapped file I/O for large files. Limits:

Languages & lexers

Notepatra ships 92 language lexers as of v0.1.66 (covering 226 file extensions). The Language menu is split into a narrow two-tier layout — Common + SQL Dialects at the top, More Languages as an alphabetical submenu of the rest. Extensions are auto-detected from the filename.

Core (Qt-bundled): Python, JavaScript/TypeScript/JSX, CoffeeScript, C/C++, C#, D, Java/Kotlin, HTML/PHP, CSS/SCSS/LESS, XML, JSON, SQL (ANSI / T-SQL / PL/SQL / MySQL / PostgreSQL / SQLite), Bash, Batch, Ruby, Perl, Lua, TCL, Fortran, Matlab, Octave, IDL, NASM, MASM, Verilog, VHDL, TeX, PostScript, POV, Spice, AVS, Properties, PO, IntelHex, SRecord, Markdown, YAML, Diff, Pascal, CMake, Makefile.

v0.1.55 additions (32 dedicated lexers, keyword tables verified against official specs): Dart · Solidity · Zig · Vala · Hack · Julia · R · Protobuf · F# · HCL/Terraform · Thrift · GraphQL · GDScript · Nim · Cython · Mojo · Crystal · Elixir · Scala · Groovy · Apex · Jinja · Liquid · Twig · Dockerfile · Fish · Nushell · TOML · DotEnv · Gitignore · JSON5 · BibTeX. Each ships with comment / uncomment / block-comment syntax (Ctrl+Q / Ctrl+Shift+Q) and proper file-extension routing — .dart, .zig, .jl, .toml, .ex, Cargo.toml, Dockerfile, .gitignore, .env, .fish, .json5, .tf, etc. now point at their dedicated lexers instead of falling back to the closest-fit generic.

Rust / Go / Swift still use the C++ lexer base — brace/bracket handling is correct but language-specific keywords are generic. Dedicated lexers for these are planned.

Themes & palette

Three themes ship: Light (default), Dark, and Monokai. Change via Settings → Theme. The theme is persisted in config.json.

Every theme applies a curated 9-hue palette: keywords blue bold (#0000FF), types violet (#8000FF), comments green italic (#008000), numbers orange (#FF8000), strings grey (#808080), operators navy bold (#000080), preprocessor brown (#804000), classes maroon (#7F0000), identifiers plain (default text). Dark theme uses Zenburn-derived hues (warm sand keywords, sage types, rose strings, olive operators, peach preprocessor) so each token kind is on a distinct hue arc — no more "all blue shades" effect. The palette is driven by lexer->description(i) — it walks every style slot in every lexer, matches substrings like "keyword", "comment", "string", and paints accordingly. Works across all 92 lexers without hard-coding.
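
A minimal sketch of that description-matching walk (the function name and the style-slot bound are assumptions, not the actual Notepatra code):

#include <Qsci/qscilexer.h>
#include <QColor>
#include <QString>

// Walk every Scintilla style slot of a lexer, look at the style's
// human-readable description, and paint it with the palette hue for
// that token kind. Unused slots return an empty description.
void applyPalette(QsciLexer *lexer) {
    for (int style = 0; style < 128; ++style) {
        const QString desc = lexer->description(style).toLower();
        if (desc.isEmpty())
            continue;                                       // slot not used by this lexer
        if (desc.contains("keyword"))        lexer->setColor(QColor("#0000FF"), style);
        else if (desc.contains("comment"))   lexer->setColor(QColor("#008000"), style);
        else if (desc.contains("string"))    lexer->setColor(QColor("#808080"), style);
        else if (desc.contains("number"))    lexer->setColor(QColor("#FF8000"), style);
        else if (desc.contains("operator"))  lexer->setColor(QColor("#000080"), style);
    }
}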

Rulers & crosshair overlay

Optional visual aids for long files and precise cursor placement:

Find & replace

5-tab dialog: Find, Replace, Find in Files, Mark, Go to.

Project Search

A separate, full-tab search dedicated to scanning an entire folder tree. Designed for the "where is this used across the whole project?" workflow — different from the inline Find & Replace dialog (which targets the current document or open tabs). Streams results live as files are scanned; safe on multi-GB log files.

What it searches

Query modes

Engine

Plain-text searches under ~50 MB per file route through a Rust Aho-Corasick fast path (multiple patterns at once, sub-millisecond on small files). Larger files and regex queries use a streaming line-by-line scanner. Each match reports line:column (column is 1-based and counted in characters, not bytes — UTF-8 multi-byte sequences count as one).
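
As a rough sketch of the character-vs-byte distinction, this assumed helper converts a byte offset inside a UTF-8 line into the 1-based column that gets reported:

#include <QByteArray>
#include <QString>

// Decode only the bytes before the match and count QChars, so a multi-byte
// UTF-8 sequence such as "é" or "→" advances the column by one, not two or three.
int columnForByteOffset(const QByteArray &lineUtf8, int matchByteOffset) {
    const QString prefix = QString::fromUtf8(lineUtf8.constData(), matchByteOffset);
    return prefix.size() + 1;                // 1-based column in characters
}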

Results

Results stream into a tree-grouped panel: one node per file with hits, expandable to per-line snippets. Double-click any hit to jump the editor to the exact line and column. Live counters show files scanned, matches so far, and elapsed time. Cancel the search any time — the worker thread bails on the next file boundary.

Tip: for a quick "is this string anywhere in my project?" use the default plain-text mode with the phrase typed as-is. Regex is only worth the syntax overhead when you actually need backtracking, alternation, or character classes.

Macros

Record keyboard + mouse operations and play them back:

Uses QScintilla's built-in QsciMacro serialization.

Sessions & crash recovery

On close, Notepatra saves ~/.config/notepatra/session.json: list of open file paths, cursor positions, window geometry, maximized state, theme, active tab. On next launch, all tabs reopen at the same cursor positions.

Every 10 seconds, Notepatra writes unsaved buffers to ~/.config/notepatra/recovery/<crash-id>.txt. If the app crashes (caught by SIGSEGV/SIGABRT/SIGFPE handlers), the next launch detects .crash_flag and offers to restore unsaved work.

JSON Tools plugin

Open: Plugins → JSON Tools (inbuilt). Panel opens as a new tab.

Buttons

Session log

Below the Scintilla output, a small list widget records every action taken during the session with before/after char counts, a delta, and a smart description:

[14:22:31] Format: 60 → 86 chars (+26)           +2 commas, +1 brace
[14:22:45] AI Fix (Ollama): 60 → 63 chars (+3)   +2 commas  [5 lines]
[14:23:02] Minify: 98 → 60 chars (-38)

Color-coded: teal for "added/fixed", amber for "shrunk/minified", gray for no-op. Capped at 50 entries, scrollable.

HTML Tools plugin

Same structure as JSON Tools but with HTML-specific buttons: Format (2 spaces), Format (4 spaces), Minify, Fix + Format, AI Fix, Show Diff, Copy Output. Fixer handles: self-closing <img>/<br>, mismatched tags, attribute quotes.

Bracket Tools plugin

Generic bracket/quote balancer. Works on any language. Uses the Rust core's bracket_fix module which walks the input, tracks a bracket stack, and closes any unclosed opens at the end in reverse order.
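
A minimal sketch of that walk, written here in C++ for illustration (the real module lives in the Rust core):

#include <QString>
#include <QVector>

// Track a stack of expected closers while walking the input; whatever is
// still open at the end gets closed in reverse order.
QString closeUnclosed(const QString &input) {
    const QString opens  = "([{";
    const QString closes = ")]}";
    QVector<QChar> pending;
    for (const QChar c : input) {
        const int o = opens.indexOf(c);
        if (o >= 0)
            pending.push_back(closes[o]);                   // remember the matching closer
        else if (!pending.isEmpty() && c == pending.last())
            pending.pop_back();                             // a close matched its open
    }
    QString fixed = input;
    while (!pending.isEmpty())
        fixed += pending.takeLast();                        // close remaining opens, innermost first
    return fixed;
}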

SQL Formatter plugin

Open: Plugins → SQL Formatter (inbuilt).

Dialect-specific keyword sets are fed to Scintilla via SCI_SETKEYWORDS (since QsciLexerSQL::setKeywords is protected). T-SQL adds DECLARE/MERGE/OUTPUT/PIVOT/OVER/PARTITION, PL/SQL adds PLS_INTEGER/SYSDATE/NVL/DECODE/CONNECT BY, etc.
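
A rough sketch of that call path, with an assumed helper name and a truncated keyword list:

#include <Qsci/qsciscintilla.h>

// QsciLexerSQL::setKeywords() is protected, so the raw Scintilla message is
// sent through the editor widget instead. Keyword set 0 is the primary list.
void applyTsqlKeywords(QsciScintilla *editor) {
    const char *tsqlKeywords =
        "select from where group by order having declare merge output pivot over partition";
    editor->SendScintilla(QsciScintillaBase::SCI_SETKEYWORDS, 0, tsqlKeywords);
}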

Compare

A side-by-side diff view shipped under Plugins → Compare. Backed by Rust's Myers diff (similar crate) and a custom CompareWidget rendered with Qt.

Visual UX inspired by ComparePlus by Pavel Nedev. Notepatra's implementation is a fresh Qt + Rust port in a different codebase, but the visual conventions are credit to Pavel.

How it works

  1. Pick a left and right file (open tab, unsaved tab, or file on disk).
  2. Rust's Myers diff (similar crate) produces a list of Equal / Insert / Delete entries.
  3. Consecutive Delete + Add blocks are paired into "modified" rows at the same visual line.
  4. Within paired modified rows, a common-prefix + common-suffix detector finds the exact differing bytes and highlights only those characters — not the whole line.
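
A minimal sketch of the step-4 prefix/suffix scan (struct and function names are assumptions):

#include <QString>
#include <algorithm>

struct CharSpan {
    int start;      // first differing character (same offset on both sides)
    int leftLen;    // characters to box red on the left
    int rightLen;   // characters to box green on the right
};

// Scan matching characters from the front, then from the back, and return
// only the span in the middle that actually differs.
CharSpan diffSpan(const QString &left, const QString &right) {
    const int maxPrefix = std::min(left.size(), right.size());
    int prefix = 0;
    while (prefix < maxPrefix && left[prefix] == right[prefix])
        ++prefix;

    int suffix = 0;
    while (suffix < maxPrefix - prefix &&
           left[left.size() - 1 - suffix] == right[right.size() - 1 - suffix])
        ++suffix;

    return { prefix,
             static_cast<int>(left.size())  - prefix - suffix,
             static_cast<int>(right.size()) - prefix - suffix };
}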

Visual markers

Row kind | Background | Symbol margin | Character highlight
Equal / context | white
Modified (paired) | pale yellow #FFFBE6 | pink ~ (Circle) | red box on left, green box on right — only differing chars
Added (right only) | mint green #D4F4D4 | green + (Plus)
Deleted (left only) | salmon #F4D4D4 | red − (Minus)
Placeholder (empty on one side) | light blue #E8F0F8 | green + in margin

Line numbers

Each panel uses a TextMargin with custom per-row text. The LEFT panel shows the original LEFT-source line numbers; the RIGHT panel shows the original RIGHT-source line numbers. They diverge cleanly when there are insertions/deletions — the empty placeholder on one side shows a green + instead of a number.

Toolbar

Scroll sync

Both vertical and horizontal scrollbars are mirrored. Drag either side's scrollbar and both panels move together. Qt's built-in valueChanged signal is naturally cycle-safe because setValue(x) is a no-op when x == current.
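
A sketch of that wiring for two panels, assuming they are plain QsciScintilla widgets:

#include <Qsci/qsciscintilla.h>
#include <QScrollBar>
#include <QObject>

// Mirror both scrollbars in both directions. No recursion guard is needed
// because QScrollBar::setValue() only emits valueChanged when the value
// actually changes, so the echo stops after one hop.
void syncScrolling(QsciScintilla *left, QsciScintilla *right) {
    QObject::connect(left->verticalScrollBar(),    &QScrollBar::valueChanged,
                     right->verticalScrollBar(),   &QScrollBar::setValue);
    QObject::connect(right->verticalScrollBar(),   &QScrollBar::valueChanged,
                     left->verticalScrollBar(),    &QScrollBar::setValue);
    QObject::connect(left->horizontalScrollBar(),  &QScrollBar::valueChanged,
                     right->horizontalScrollBar(), &QScrollBar::setValue);
    QObject::connect(right->horizontalScrollBar(), &QScrollBar::valueChanged,
                     left->horizontalScrollBar(),  &QScrollBar::setValue);
}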

Git integration

Opens as a dockable panel. Shows:

Shells out to git via QProcess. Non-invasive — no libgit2 dependency.

Terminal

Built-in terminal panel (Linux/macOS). Uses QProcess to spawn bash/zsh and feeds stdout/stderr to a monospace widget. Not a full pty (no colors, no curses) — fine for simple commands and scripts, not for vim/htop.

REST client

A Postman-lite panel: URL field, method dropdown (GET/POST/PUT/PATCH/DELETE/HEAD), headers editor, request body editor with JSON syntax highlighting, response viewer with pretty-printed JSON output + status code + timing.

Hex editor

Byte-level view of any file. Left column: offset. Middle: hex bytes in 16-byte rows. Right: printable ASCII. Read-only for now.

Markdown preview

Live side-by-side preview for .md/.markdown files. Uses Qt's QTextDocument::setMarkdown() — CommonMark-compatible. Scrolls in sync with the source.

AI integration overview

Notepatra's AI is local-first by default with optional cloud backends. As of v0.1.55, six backends are available — pick from the AI dock dropdown:

Backend | Type | Protocol / URL | What it gives you
Ollama (default) | Local | Native /api/generate · http://localhost:11434 | Easiest to install. One-line install, auto-detects models with /api/tags. /api/show capability probe (v0.1.55) auto-detects which models support tools / thinking / vision.
Ollama Cloud | Cloud | OpenAI-compat over HTTPS · https://ollama.com | Same Ollama models, served from Ollama's hosted infrastructure. Per-provider key slot in Settings.
llama.cpp | Local | OpenAI /v1/chat/completions · http://localhost:8080 | Loads any GGUF file directly — no daemon, no config format. Maximum control + minimum overhead.
OpenRouter | Cloud | OpenAI-compat · https://openrouter.ai/api | One key, hundreds of models (Anthropic, OpenAI, Google, Mistral, Meta, xAI, DeepSeek, Qwen). Unified reasoning field — Think checkbox toggles thinking on Claude / o-series / Gemini consistently.
OpenAI | Cloud | OpenAI /v1/chat/completions · https://api.openai.com | Direct OpenAI access for GPT-4 family + o-series. Per-provider key slot.
Azure OpenAI | Cloud | Azure deployment URL with ?api-version=... | Enterprise OpenAI through your Azure subscription. Configure resource name + deployment name + API version in the dedicated Azure section of the Settings dialog.

If the selected backend isn't reachable, AI panels show a backend-specific banner (e.g. "Ollama not running — start it: ollama serve", or for cloud backends "OpenRouter key not set — open Settings"). The cloud-config banner is red when keys are missing for the selected backend, green when configured.

Per-provider key slots (v0.1.55). The AI Settings dialog has a 4-section layout — OpenRouter, OpenAI, Ollama Cloud, Azure OpenAI — each with its own Test / Save / Forget buttons and key slot. Strict no-cross-provider lookup: the OpenRouter backend will never accidentally use the OpenAI key, and vice-versa. Legacy single-key configs migrate automatically by sniffing the prefix (sk-or- → OpenRouter, sk- → OpenAI).

Searchable model dropdown (v0.1.55). Type any provider key (openai, anthropic, google, xai) or alias (grok, claude, gpt, gemini, kimi, qwen) and the dropdown filters to that provider's models — works whether you remember the brand name or the model name.

AI surfaces in Notepatra:

Privacy: "Share file with AI" toggle (v0.1.55). A red lock indicator next to the AI dock title controls whether the file currently in the editor is included in the AI's context. Default OFF. Available only in Coding Mode — Chat and Data Analyst modes never see file content. The system prompt teaches the model to politely instruct you to enable the toggle if asked "can you see this file?".

Credential scrubber (v0.1.55). Every editor-derived chunk (selection, workspace block, attached file, user input) passes through a 14-pattern redactor before leaving the machine — OpenRouter / Anthropic / OpenAI / GitHub / GitLab / AWS / Slack / Stripe / SendGrid / Google API / JWT / PEM private-key blocks / generic password=/api_key=/token=. Replacement is in-place with a [REDACTED-VENDOR-KEY] marker so the model still sees that a credential was redacted.
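
A hypothetical sketch of one scrub pass; the real redactor applies 14 vendor patterns, and the regexes below are illustrative only:

#include <QRegularExpression>
#include <QString>

// One pass of the scrubber: vendor-shaped secrets are replaced in place with
// a marker so the model still sees that something was redacted.
// The sk-or- pattern must run before the generic sk- pattern.
QString scrubCredentials(QString text) {
    text.replace(QRegularExpression("sk-or-[A-Za-z0-9-]{20,}"), "[REDACTED-OPENROUTER-KEY]");
    text.replace(QRegularExpression("sk-[A-Za-z0-9]{20,}"),     "[REDACTED-OPENAI-KEY]");
    text.replace(QRegularExpression("ghp_[A-Za-z0-9]{36}"),     "[REDACTED-GITHUB-TOKEN]");
    text.replace(QRegularExpression("(?i)\\b(api_key|token|password)\\s*=\\s*\\S+"),
                 "\\1=[REDACTED]");
    return text;
}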

Ollama setup

  1. Install Ollama: curl -fsSL https://ollama.com/install.sh | sh (Linux/macOS) or download from ollama.com (Windows).
  2. Start the daemon: ollama serve. Listens on http://localhost:11434 by default.
  3. Pull at least one model. Recommended starter:
    • ollama pull qwen2.5:7b — 4.7 GB, general-purpose, good at code
    • ollama pull llama3.2:3b — 2 GB, smaller, faster
    • ollama pull codellama:7b — 3.8 GB, code-specialized
    • ollama pull qwen2.5-coder:7b — 4.4 GB, code-specialized + recent
  4. Launch Notepatra → open any AI panel → the model dropdown auto-populates with whatever you pulled.
Memory check. 7B models need ~6 GB RAM, 13B ≥ 10 GB, 34B ≥ 24 GB. For CPU-only / 16 GB laptops, Notepatra auto-picks the smallest installed model in this priority order: qwen2.5-coder:3b → qwen2.5:3b → gemma2:2b → gemma3:4b → llama3.2:3b.

llama.cpp setup (GGUF)

llama.cpp runs any .gguf file directly — no daemon, no config format. Best if you have specific GGUF models from Hugging Face and want maximum control.

  1. Install llama.cpp: brew install llama.cpp (macOS) or build from the repo (Linux / Windows).
  2. Download a GGUF model from huggingface.co. Try Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf (~2 GB, excellent for code).
  3. Run the server: llama-server -m path/to/model.gguf --port 8080. Exposes OpenAI-compatible endpoints at http://localhost:8080/v1/.
  4. In Notepatra → Settings → Preferences → AI → pick "llama.cpp" → Save. Done.

AI backend setup — full guide

The condensed setup blocks above cover the happy path. This guide walks every supported backend through install, first-run, and the most common failure modes — per OS where the steps differ. Use it when something doesn't work or you're setting up a new machine.

Ollama (recommended for beginners)

One installer, auto-discovery of models, native HTTP API on port 11434. Works on every desktop OS Notepatra runs on.

🐧 Linux

  1. Install — one-liner provided by Ollama:
    curl -fsSL https://ollama.com/install.sh | sh
    The installer drops a binary in /usr/local/bin/ollama and registers a systemd unit ollama.service.
  2. Daemon — the installer starts and enables it. Check status:
    systemctl status ollama
    journalctl -u ollama -f         # follow logs
    If your distro has no systemd (Void, Artix, Alpine, WSL1), run ollama serve in a terminal you keep open.
  3. Pull a model:
    ollama pull qwen2.5-coder:7b
  4. Verify:
    ollama list                     # should show the model
    curl http://localhost:11434/api/tags    # JSON list of installed models
  5. Notepatra config — open the AI dock, pick "Ollama" in the backend dropdown. The model dropdown auto-populates. No URL field to set; http://localhost:11434 is hard-wired.

🪟 Windows

  1. Install — download OllamaSetup.exe from ollama.com/download/windows and run it. Installs to %LOCALAPPDATA%\Programs\Ollama and starts a tray icon at boot.
  2. Daemon — the tray icon means it's running. Right-click the tray icon → "Quit Ollama" to stop. Re-launch from the Start menu.
  3. Pull a model — open PowerShell or cmd:
    ollama pull qwen2.5-coder:7b
  4. Verify:
    ollama list
    curl http://localhost:11434/api/tags
  5. Notepatra config — same as Linux: pick "Ollama" in the backend dropdown.

🍎 macOS

  1. Install — either the .dmg from ollama.com/download/mac, or via Homebrew:
    brew install --cask ollama
  2. Daemon — launching the Ollama app puts an icon in the menu bar and starts the server. Quit from the menu-bar icon to stop.
  3. Pull a model — open Terminal:
    ollama pull qwen2.5-coder:7b
  4. Verify:
    ollama list
    curl http://localhost:11434/api/tags
  5. Notepatra config — pick "Ollama" in the backend dropdown.

Troubleshooting

"Ollama not running" banner won't go away

Notepatra polls http://localhost:11434 and the banner stays red while the probe fails. Causes:

Models don't appear in Notepatra dropdown

Ollama is reachable (banner is green) but the dropdown is empty or stale.

Model pulled but capability probe says no tools

v0.1.55 introduced an /api/show capability probe — Coding Mode hides models that can't call tools. If a model you expect to support tools is greyed out:

Out of memory on 7B+ models

Symptom: Ollama logs show llama_model_load: error loading model: failed to allocate buffer, or the daemon segfaults mid-generation.

llama.cpp (power users)

Maximum control. You pick the GGUF, you pick the quant, you pick the GPU layers. No daemon, no model registry — just llama-server with your file. Default port 8080, OpenAI-compatible at /v1.

🐧 Linux — build from source

  1. Clone and build:
    git clone https://github.com/ggml-org/llama.cpp
    cd llama.cpp
    cmake -B build -DGGML_CUDA=ON     # or -DGGML_VULKAN=ON, or omit for CPU-only
    cmake --build build --config Release -j
    Binaries land in build/bin/. Add to PATH or copy llama-server somewhere in your $PATH.
  2. Verify:
    llama-server --version

🪟 Windows — Releases zip

  1. Download the latest pre-built zip from github.com/ggml-org/llama.cpp/releases. Pick llama-bXXXX-bin-win-cuda-x64.zip if you have an NVIDIA GPU, or ...-vulkan-... for AMD/Intel, or ...-avx2-... for CPU-only.
  2. Extract to a folder you'll remember (e.g. C:\llama.cpp) and add it to your PATH environment variable, or always invoke with the full path.
  3. Verify in PowerShell:
    llama-server.exe --version

🍎 macOS — Homebrew

  1. Install:
    brew install llama.cpp
    Apple Silicon Macs get Metal acceleration by default — no extra flags needed.
  2. Verify:
    llama-server --version

Picking a GGUF

GGUF is a quantised single-file model format. Smaller files run faster but lose accuracy. The sweet spot for 7B-13B models is Q4_K_M; pick higher only if you have spare VRAM.

Quant | Size (7B) | Quality | When to pick
Q4_K_M | ~4.4 GB | Good | Default. Fits on 8 GB RAM / 6 GB VRAM. Recommended.
Q5_K_M | ~5.1 GB | Better | Noticeably sharper on coding. Needs ~10 GB free.
Q8_0 | ~7.7 GB | Near-lossless | If you have 16 GB+ and want max fidelity short of full precision.
F16 / BF16 | ~14 GB | Lossless | Research / fine-tuning. Overkill for chat.

For ready-to-use quants, bartowski on Hugging Face publishes consistent re-quants of every popular release with all sizes side by side.

Running llama-server

Minimum command — just point at the GGUF:

llama-server -m /path/to/Qwen2.5-Coder-7B-Instruct-Q4_K_M.gguf --port 8080

Useful flags:

In Notepatra, pick "llama.cpp" in the backend dropdown. The connectivity dot turns green and the model name (read from the running server) appears in the dropdown.

Troubleshooting

"llama-server not running" — but it IS running

Connectivity probe to http://localhost:8080/v1/models failed. Causes:

Curated catalog showing instead of my loaded model

The dropdown lists Notepatra's curated catalog rather than the model you loaded.

Cloud backends (OpenAI, OpenRouter, Anthropic, Azure, Ollama Cloud)

Cloud backends need a key. Notepatra has per-provider key slots — keys never cross-pollinate between providers. Open Settings → Preferences → AI to see the four sections (OpenRouter, OpenAI, Ollama Cloud, Azure OpenAI), each with its own Test / Save / Forget buttons.

OpenRouter (recommended cloud — one key, many models)

  1. Sign up at openrouter.ai. Add a few dollars credit; pay-as-you-go per token, no subscription.
  2. Create a key — Account → Keys → Create. Copy the sk-or-v1-... string.
  3. Paste in Notepatra — Settings → AI → OpenRouter section → API Key field → click Test. Green = good. Click Save.
  4. Pick "OpenRouter" in the backend dropdown. The model dropdown lists hundreds of models — type claude, gpt, gemini, grok, kimi, qwen, etc. to filter.

OpenAI direct

  1. Get a key at platform.openai.com/api-keys. The string starts with sk- (or sk-proj- for project keys).
  2. Paste in Settings → AI → OpenAI section → Test → Save.
  3. Pick "OpenAI" in the backend dropdown. Curated list of GPT-4 family + o-series; type o1 or gpt-4 to filter.

Azure OpenAI deployment

Azure exposes OpenAI through your tenant — you pay your Azure bill, not OpenAI directly. Setup is fiddlier because you address a deployment, not a model.

  1. Get the four values from the Azure portal:
    • Resource name — e.g. my-openai-east (the part before .openai.azure.com).
    • Deployment name — what you called the deployment when you created it. NOT the same as the model name. Example: gpt-4o-prod.
    • API version — e.g. 2024-08-01-preview. Use the latest stable from the Azure docs.
    • Key — Azure portal → your resource → Keys and Endpoint.
  2. Settings → AI → Azure OpenAI — fill all four fields. Notepatra builds the URL https://{resource}.openai.azure.com/openai/deployments/{deployment}/chat/completions?api-version={version} for you.
  3. Test — green = the deployment answered. Save, then pick "Azure OpenAI" in the backend dropdown.

Ollama Cloud

Same Ollama models, served from Ollama's hosted infra. Useful when you want Ollama's catalog without the local RAM cost.

  1. Sign up at ollama.com and create an API key in your account settings.
  2. Settings → AI → Ollama Cloud — paste the key → Test → Save.
  3. Pick "Ollama Cloud" in the backend dropdown. The model list is the cloud catalog.

Per-provider key management

Every key slot has three buttons:

Notepatra never uses a key from one provider against another. Selecting OpenRouter as the backend with no OpenRouter key set will show "OpenRouter key not set", even if your OpenAI key is configured.

Reasoning / thinking models

The Think checkbox in the AI dock toggles reasoning mode. Notepatra normalises the protocol per provider:

Troubleshooting

"Key invalid" — but it works in curl

Test button reports failure even though the same key works from a terminal.

Rate-limit errors / 429

Provider replies with HTTP 429.

Model not in dropdown after refresh

You expect a specific model and it's not in the list, even after clicking refresh.

Cross-cutting troubleshooting

Backend-agnostic problems and their fixes.

AI panel says "thinking…" forever
Stop button doesn't stop
Coding mode sends but no tool calls fire
Capabilities probe never returns
Where Notepatra stores AI config

OpenAI-compat (LM Studio, Jan, vLLM, KoboldCpp, llamafile, OpenRouter)

Any local server speaking the OpenAI /v1/chat/completions API works. Paste its URL in Settings → Preferences → AI → OpenAI-compat → Base URL. If the server requires an API key (e.g. OpenRouter, OpenAI itself), paste it in the API Key field — it's sent as an Authorization: Bearer header. Ollama and llama-server ignore it.

Server | Typical URL | Notes
LM Studio | http://localhost:1234 | GUI for GGUF — download models with one click.
Jan | http://localhost:1337 | Cross-platform, open source, great UX.
vLLM | http://localhost:8000 | Fastest for batch inference on GPUs.
KoboldCpp | http://localhost:5001 | GGUF loader with roleplay features.
llamafile | http://localhost:8080 | Single-file self-contained GGUF executable.
text-generation-webui | http://localhost:5000 | Power-user web UI for local models.
OpenRouter | https://openrouter.ai/api | Cloud proxy — requires API key.
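
As an illustration of what such a request looks like on the wire, here is a hedged Qt sketch; it assumes the saved base URL excludes the /v1 path (as in the table above) and that the helper name is hypothetical:

#include <QNetworkRequest>
#include <QUrl>
#include <QByteArray>
#include <QString>

// Address an OpenAI-compatible chat endpoint: base URL from Settings plus the
// /v1/chat/completions path, with the optional key as a Bearer header.
QNetworkRequest buildChatRequest(const QString &baseUrl, const QString &apiKey) {
    QNetworkRequest req(QUrl(baseUrl + "/v1/chat/completions"));
    req.setHeader(QNetworkRequest::ContentTypeHeader, "application/json");
    if (!apiKey.isEmpty())
        req.setRawHeader("Authorization", QByteArray("Bearer ") + apiKey.toUtf8());
    return req;
}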

AI Fix pipeline

The AI Fix button in JSON Tools / HTML Tools / Bracket Tools sends the broken content to Ollama with a strict prompt. Here's exactly what happens:

Request

POST http://localhost:11434/api/generate
Content-Type: application/json

{
  "model": "qwen3.5:9b",
  "prompt": "Fix ONLY the broken parts of this JSON. Make MINIMAL
             changes. PRESERVE the original line order, key order, and
             formatting. Do NOT reorder keys. Do NOT reformat. Return
             ONLY the corrected JSON.\n\nBROKEN JSON:\n{...}",
  "system": "You are a minimal-change JSON patcher. ... /no_think",
  "stream": true,
  "think": false,
  "options": { "temperature": 0.1, "num_predict": 4096 }
}

Response cleanup pipeline

Models don't always follow instructions. The response goes through a defensive cleanup pipeline before being displayed:

  1. Strip <think>...</think> blocks — defensive regex strip for models that emit reasoning despite think: false.
  2. Strip markdown ``` fences — if the response starts with ```, find the first newline and the last fence and keep the middle.
  3. Trim leading prose — find the first { or [ in the response and discard everything before. Handles "Here is the fixed JSON: {...}".
  4. Format with Rust's serde_json — if parseable, pretty-print with 4-space indent. Otherwise show raw cleaned text.

Why the strict prompt matters

Without explicit instructions, models love to reformat and reorder keys alphabetically. That's useless for a diff — every line looks different even when only one comma was missing. The strict prompt tells the model to preserve the original line order, key order, and indentation, and patch only the broken parts. Show Diff then shows just the actual fixes.

AI Assistant dock

Open: Ctrl+Shift+A, View → AI Assistant, or from the status bar. The panel is a persistent right-side dock (not an editor tab) — one conversation, preserved across tab switches. Layout:

Streaming tokens flow into the active assistant bubble in real time. Errors render as red error bubbles.

Workspace awareness

Every prompt to the AI dock automatically carries the right context — Notepatra figures out what the model needs:

So the model can reference files you haven't opened yet — "import from utils.py" works even when utils.py isn't in a tab. The block is gated by intent so casual chat ("hi", "thanks") doesn't get spammed with workspace dump.

Persistent chat history (v0.1.39+)

Conversations survive app restart. Stored at:

~/.config/notepatra/chat-history/<sha1-of-workspace>.json

One file per workspace; switching workspaces loads the right history. Saves debounced 2 s, capped at 1 MB per workspace (oldest messages roll off). Reset deletes the on-disk file. Atomic write via .tmp + rename so a kill-9 mid-write never leaves a corrupt history.
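
A minimal sketch of that temp-file-then-rename pattern (function name is an assumption):

#include <QFile>
#include <QByteArray>
#include <QString>
#include <cstdio>

// Write the whole payload to a sibling .tmp file, then rename it over the
// target. std::rename replaces the destination atomically on POSIX, so a
// kill -9 mid-save leaves either the old history or the new one, never a torn file.
bool atomicWrite(const QString &path, const QByteArray &json) {
    const QString tmp = path + ".tmp";
    QFile f(tmp);
    if (!f.open(QIODevice::WriteOnly) || f.write(json) != json.size())
        return false;
    f.close();
    return std::rename(QFile::encodeName(tmp).constData(),
                       QFile::encodeName(path).constData()) == 0;
}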

Fix-intent detection (v0.1.40+)

Type fix my json / repair this html / the sql is broken in the chat input and the system prompt automatically swaps to a strict minimal-change patcher (same rules as Tools → JSON Tools → AI Fix). Models stop "improving" the input by adding fields, reordering keys, or restructuring.

Does NOT trigger on:

Implementation lives in src/ai_intent.{h,cpp}. 49 assertions in test_ai_intent cover positive intents (case-insensitive, mixed phrasing), negatives (explain / describe / teach / show / list / find / grep), and edge cases (multi-line, @file mention, generic "fix my code").

Coding Mode — agentic tool-using AI (v0.1.35+)

Tick Coding Mode in the AI dock and Notepatra becomes an agent: the model can read your files, list directories, search, write files, and apply line-level edits — all on its own. The dock flips into a 3-pane layout (file tree · editor · AI chat).

Bottom segmented toggle — Chat / Compose / Agent (v0.1.61+)

The old top-of-panel Chat | Composer tabs are gone. v0.1.61 introduced a 3-segment toggle at the bottom of the AI dock — matching the iOS / Slack keyboard-accessory mental model that Continue.dev, Copilot Chat, and Cursor 3.0 all converged on:

Per-hunk Stage / Revert from the editor gutter (v0.1.62+)

Click any green / red / blue change marker in the editor's git gutter (margin 3) and a hunk popup anchors at that line showing the before-vs-after content (an embedded DiffView) with three buttons:

The Compare widget (Tools → Compare Files) gets the same buttons in a docked strip — one row per contiguous diff region (Hunk N/M · rows a–b · [Stage] [Revert] [Jump →]).

Marker-based merge resolution (v0.1.62+)

Files with UU (both-modified) status in the Git panel now show a Resolve button instead of the +/− shortcuts. Clicking it opens the merge helper widget which scans column-0 <<<<<<< / ======= / >>>>>>> markers and surfaces Take ours / Take theirs / Take both / Jump → buttons per conflict region, plus QScintilla annotation labels above each conflict. Full 3-way LOCAL/BASE/REMOTE merge editor is v0.2 scope; the marker-based path covers ~90% of conflicts.

Vision drag-and-drop (v0.1.61+)

Drop an image (PNG / JPG / WEBP / GIF / BMP) or document (PDF / DOCX / PPTX) onto the AI dock. If the active model is not vision-capable, you get a styled error bubble listing alternatives — local qwen2.5vl:7b / gemma3:4b, cloud claude-sonnet-4-6 / gpt-5 / gemini-2.5-flash. Detection:

Smart input gating + context guards (v0.1.61+)

The chat input + Send button now disable whenever the model dropdown is in a placeholder state — (detecting…), (Ollama offline), (no models installed), (API key required). State-specific placeholders explain why with concrete next steps (ollama serve, ollama pull qwen2.5-coder:7b, click ⚙). Additional guards:

Agentic tools

The agent gets 5 native tools (v0.1.40 surface). Every tool call shows up as an inline 🔧 toolname (args) → result card in the chat:

Tool | What it does
read_file(path, offset?, limit?, with_line_numbers?) | Read a text file from the workspace. Default emits an N\t line-number prefix per line so the model can reference exact lines. v0.1.40: pass with_line_numbers=false to get raw content (recommended when feeding lines into apply_diff old_lines).
list_dir(path) | List one level of entries with type (file/dir) and size. Filters .git / node_modules / target / dist / __pycache__ / .gradle / .idea / .vs.
search(pattern, path?, regex?, glob?, case_sensitive?, max_matches?) | Find a string or regex across the workspace. Returns up to 50 (default) / 200 (max) matches with file path, line, column, and a snippet. Same heavy-dir skip-list as the file tree.
write_file(path, content, mode?) | Create or overwrite a text file. Modes: overwrite (default) / create (fails if exists, returns error_kind: exists) / append. Auto-creates parent directories inside the workspace; refuses paths matching the credential deny-list (.ssh, .pem, .key, id_rsa*, /etc/passwd, etc.). 5 MB content cap. After success the file auto-opens in a new tab — or the buffer reloads if it's already open.
apply_diff(path, hunks) | Atomic line-level edits. Each hunk has old_start_line + old_lines (expected current text) + new_lines (replacement). Two-phase apply: validates ALL hunks against the live file first → if any drifted, returns error_kind: conflict and nothing is written. Otherwise hunks apply in reverse-line-order so earlier indices stay stable. v0.1.40 three-tier match: strict → strip read_file's N\t prefix from old_lines → trimmed() comparison. Relaxed tiers still apply but emit result.warnings so the agent self-corrects on the next read. Atomic write via .tmp + std::rename.
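
To make the hunk shape concrete, here is a hypothetical example of apply_diff arguments built with Qt's JSON classes; the field names follow the table above, while the file path and lines are made up:

#include <QJsonArray>
#include <QJsonDocument>
#include <QJsonObject>

// One hunk: where the edit starts, what the file is expected to contain
// there right now, and what it should contain afterwards.
QJsonObject exampleApplyDiffArgs() {
    QJsonObject hunk;
    hunk["old_start_line"] = 42;
    hunk["old_lines"] = QJsonArray{ "    return total" };        // expected current text
    hunk["new_lines"] = QJsonArray{ "    return total + tax" };  // replacement text

    QJsonObject args;
    args["path"]  = "src/billing.py";
    args["hunks"] = QJsonArray{ hunk };
    return args;     // QJsonDocument(args).toJson() is what travels on the wire
}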

Three-layer path security

Every tool call goes through resolveSafePath:

  1. Workspace anchor + canonicalize. Relative paths resolve against the workspace root; absolute paths are accepted only if they canonicalize back inside it. Catches ../../../../etc/passwd and symlink-to-secrets attacks.
  2. Hardcoded deny-list. Refuses ~/.ssh/, *.pem, *.key, id_rsa*, /etc/passwd|shadow, ~/.gnupg/, ~/.aws/, ~/.netrc, ~/.npmrc, ~/.docker/config.json, etc. Applied to the candidate path itself, so creating ~/.ssh/foo via a mkpath chain is still refused.
  3. Structured errors. Refusals come back as error_kind: outside_workspace | denied | not_found | exists | binary | too_large | conflict | malformed_args | timed_out — the model knows what to fix.
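
A minimal sketch of the layer-1 check only (helper name is an assumption; the real resolveSafePath also handles not-yet-existing write_file targets, the deny-list, and the structured errors):

#include <QDir>
#include <QFileInfo>
#include <QString>

// Resolve the candidate against the workspace root, canonicalize to collapse
// ".." segments and follow symlinks, then require the result to stay inside
// the root. Paths that cannot be canonicalized fail closed here; the real
// implementation canonicalizes the nearest existing parent so write_file can
// still create brand-new files.
bool insideWorkspace(const QString &workspaceRoot, const QString &candidate) {
    const QString root = QFileInfo(workspaceRoot).canonicalFilePath();
    const QString resolved =
        QFileInfo(QDir(root).absoluteFilePath(candidate)).canonicalFilePath();
    if (root.isEmpty() || resolved.isEmpty())
        return false;
    // Canonical paths use '/' on every platform Qt supports.
    return resolved == root || resolved.startsWith(root + QLatin1Char('/'));
}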

Per-turn budget (Hard cap)

Hard cap of 25 tool calls per user turn to prevent runaway loops on confused models. When exhausted, the agent receives: "Tool-call budget exhausted (25 calls this turn). Stop and summarise what you've found."

Tool-call wire format

Notepatra uses each backend's native tool-call protocol:

v0.1.40 surfaces malformed tool-call JSON as a structured error_kind: malformed_args result back to the model — so a model that emits unescaped quotes in arguments gets a clear "re-emit with valid JSON" message + raw-args preview, instead of silently passing empty args downstream.

Models that work well in Coding Mode

Notepatra has a model allowlist for Ollama (other backends always send tools and trust the server's support detection). Confirmed working:

Models not recommended: phi-3-mini / phi-3.5 (no tool support), gemma-2-2b (sometimes hallucinates tool calls), tinyllama, llama 3 base.

Data Analyst Mode (v0.1.43+)

Tick the new Data checkbox in the AI dock header (next to Coding) and the AI assistant becomes a data analyst: it can query attached CSVs, run SQL against your saved database connections, and emit live charts inline in the chat. Mutually exclusive with Coding Mode so the panel stays focused.

What changes when Data Mode is on

The two new agentic tools

Tool | What it does
csv_query(file_path, sql, max_rows?, max_load_rows?) | Loads a workspace CSV into in-memory SQLite (table name csv, column names from the header) and runs your SQL. Default loads up to 250,000 rows; default returns up to 500. Lets the model ask "SELECT category, SUM(amount) FROM csv WHERE order_date >= '2026-04-01' GROUP BY category ORDER BY 2 DESC" instead of trying to mentally scan a million-line file.
query_sql(connection_name, sql, max_rows?, confirm?) | Runs SQL against a saved connection. SELECT-only by default (also: WITH / EXPLAIN / PRAGMA / SHOW / DESCRIBE). INSERT / UPDATE / DELETE / DDL require confirm:true after the user explicitly approves — no implicit mutations. Caps results at 500 rows.

Live charts inline in the chat

v0.1.64+ note: charts now ship behind the optional charts pack. On a default (lite) install, the AI's generate_chart tool result paints as a 📊 Chart rendering requires the Charts Pack card with two buttons — [Install charts pack] (opens GitHub Releases with the matching -full asset for your OS) and [View JSON instead] (shows the raw Vega-Lite spec in a fenced code block). See Lite vs Full for the upgrade paths. The flow described below is what you see once you're on the Full flavor.

When a visualization clarifies the answer, the model emits a fenced ```chart block with a small JSON spec. Notepatra parses each one and embeds a real interactive QChartView under the assistant's prose:

```chart
{
  "type": "bar",
  "title": "Revenue by category, last 30 days",
  "x": "category",
  "y": "revenue",
  "data": [
    {"category": "electronics", "revenue": 284921.50},
    {"category": "books",       "revenue":  52183.25},
    {"category": "home",        "revenue":  98442.10}
  ]
}
```

Supported types: line, bar, pie, scatter. Theme-aware. Category-aware axes for string-valued X columns; numeric axes for numeric X. Malformed JSON falls back to displaying the spec as a code block — nothing breaks.

Database connection manager

Click Manage Connections… (visible only when Data Mode is on) to add a new connection. The dialog has a Preset dropdown at the top — pick one and the form fills with sensible defaults for that database type so you only edit what's specific to your server.

Available presets (v0.1.66+):

Under the form, a hint label paints in green when the Qt SQL plugin for the chosen driver is present (with a usage tip — e.g. "Named instances use the SQL Browser service — leave Port at default") or amber with the exact per-OS install commands when the plugin is missing.

Connections are saved to ~/.config/notepatra/db-connections.json (or platform equivalent). The dialog also has a Test button that opens + closes the connection without committing the record — green tick on success, red driver error on failure.

How to connect — step by step (v0.1.66+)

SQL Server — local Docker for testing

The fastest way to play with SQL Server is the bundled Docker harness:

# 1. Spin up MS SQL Server 2022 + seed a NotepatraTest database
bash scripts/sql-server-local-setup.sh

# 2. Linux only — install Microsoft's ODBC Driver 18:
curl https://packages.microsoft.com/keys/microsoft.asc | sudo gpg --dearmor -o /usr/share/keyrings/microsoft-prod.gpg
curl https://packages.microsoft.com/config/ubuntu/24.04/prod.list | sudo tee /etc/apt/sources.list.d/mssql-release.list
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install -y msodbcsql18 unixodbc-dev libqt5sql5-odbc

# 3. Restart Notepatra. The 'sql-server-local' connection is already
#    registered — open Data Mode → Manage Connections to see it.

# Tear down later:
bash scripts/sql-server-local-setup.sh --teardown      # stop + remove container
bash scripts/sql-server-local-setup.sh --wipe          # also drop the data volume

The script creates a NotepatraTest database with customers, products, and orders tables already populated, plus a sql-server-local entry in your db-connections.json. After installing the ODBC driver, you can ask the Data Analyst:

SQL Server — connecting to an existing server (Windows / macOS / Linux)

  1. In Manage Connections, pick the SQL Server (localhost, ODBC) preset to pre-fill the form.
  2. Change Host to your server's address. For named instances (e.g. SQL Express): SERVER\INSTANCE and set Port to default.
  3. For SQL Authentication: enter your SQL login + password.
  4. For Windows Authentication (domain account / current Windows user, Windows only): clear Username, leave Password empty, and add ;Trusted_Connection=yes to Options.
  5. For TLS-required servers (Azure, modern on-prem): change Options to DRIVER={ODBC Driver 18 for SQL Server};Encrypt=yes;TrustServerCertificate=no.
  6. Click Test. If green, click Save Changes then OK.

If Test fails with "data source name not found and no default driver specified", your DRIVER={…} string doesn't match a driver actually installed. Run odbcinst -q -d (Linux/macOS) or ODBC Data Source Administrator (Windows) to see which driver names are available — use the exact name in your Options string.

PostgreSQL — local + managed (RDS / Cloud SQL / Neon / Supabase)

  1. Pick the PostgreSQL (localhost) preset.
  2. For a local Postgres, leave everything at the defaults and enter your password.
  3. For a remote / managed Postgres, set Host to your endpoint (e.g. myapp.abc123.us-east-1.rds.amazonaws.com), Database to your specific DB name (RDS doesn't accept postgres as default).
  4. For TLS-required servers (any managed PG should require TLS), set Options to sslmode=require (or sslmode=verify-full;sslrootcert=/path/to/ca.pem for cert pinning).
  5. Unix-socket connections (lower latency on the same host): leave Host empty, set Options to host=/var/run/postgresql.

MySQL / MariaDB

  1. Pick the MySQL / MariaDB (localhost) preset.
  2. For local servers, defaults are fine — set your password and the database name.
  3. For RDS / PlanetScale / managed MySQL, set Options to SSL_CA=/etc/ssl/certs/ca-certificates.crt;MYSQL_OPT_CONNECT_TIMEOUT=10.
  4. For Unix-socket: clear Host, add UNIX_SOCKET=/var/run/mysqld/mysqld.sock to Options.

SQLite — file-based

  1. Pick the SQLite (file on disk) preset — focus jumps to Database.
  2. Click Browse… and pick the .db / .sqlite / .sqlite3 file (or type a path — the file is created if it doesn't exist).
  3. No host / port / username / password — SQLite is file-based.

DuckDB — files, S3, in-memory

  1. Pick the DuckDB (file or :memory:) preset — Database defaults to :memory:.
  2. The Database field is multi-mode — point it at:
    • :memory: — ephemeral DB
    • /path/to.duckdb — persistent DuckDB file
    • /path/to.csv, /path/to.parquet, /path/to.json — DuckDB reads them directly (no import step)
    • s3://bucket/key.parquet — S3 via DuckDB's httpfs extension; fill Options with region;access_key_id;secret;session_token
  3. No host / port / username / password — DuckDB runs in-process via the native libduckdb engine.

Driver availability per OS

Driver | Linux (Debian/Ubuntu) | macOS (Homebrew) | Windows
QSQLITE | Built into Qt — always available | Built into Qt — always available | Built into Qt — always available
QPSQL (PostgreSQL) | sudo apt-get install libqt5sql5-psql | bundled in brew install qt@5 | Bundled with the Notepatra Windows release
QMYSQL (MySQL/MariaDB) | sudo apt-get install libqt5sql5-mysql | bundled in brew install qt@5 | Bundled with the Notepatra Windows release
QODBC (SQL Server) | sudo apt-get install libqt5sql5-odbc unixodbc-dev msodbcsql18 (see MS install guide for the package repo setup) | brew tap microsoft/mssql-release && HOMEBREW_ACCEPT_EULA=Y brew install msodbcsql18 unixodbc | Bundled with the Notepatra Windows release (Microsoft ships ODBC drivers with Windows itself)
DUCKDB | Bundled in the Notepatra Linux release (vendored libduckdb.so) | Bundled in the Notepatra macOS release | Bundled in the Notepatra Windows release

Troubleshooting

Browse Schemas… (Database tree dialog, v0.1.55)

Click Browse Schemas… in the Data Analyst panel to open a tree view of every saved connection. Schema introspection is lazy — clicking a connection node loads its schemas; clicking a schema loads its tables; clicking a table loads its columns + types. Supports SQLite (via sqlite_master), DuckDB (via the native listTables / describeTable wrappers), and any database with INFORMATION_SCHEMA (PostgreSQL, MySQL, SQL Server). Live filter input narrows the tree by name. Right-click any node:

Honest limitation: connection passwords are obscured at rest (XOR + base64), NOT real encryption. This is "don't show plaintext to people walking past my screen", NOT "survives a stolen laptop." For production secrets, use OS keychain / .pgpass / instance-role IAM. OS-keychain integration is a future candidate.

Project-level data context: .notepatra/data-analyst/ (v0.1.55)

Drop one or more markdown / SQL files into a .notepatra/data-analyst/ directory in your workspace and Notepatra auto-prepends their contents to the system prompt as a "Project data context" layer when Data Mode is on. Per-workspace, version-controllable. Conventions:

Caps: 64 KB total across all files, 16 KB per file. The Welcome card surfaces the loaded-files count so you know context was picked up. The legacy single-file .notepatra/data-analyst.md is also still supported.

- The `orders` table joins `customers` on `customer_id`.
- Treat NULL in `amount` as 0 (legacy import bug, never backfilled).
- `rate` is a percentage stored 0–1, not 0–100. Don't average it.
- Q1 = Jan-Mar, fiscal year matches calendar.

Then ask "show me last quarter's top-revenue customers" and the model already knows the schema and quirks.

Model capability gating

Multi-table SQL and chart-spec emission are harder than read_file. AiTools::modelCapableOfDataAnalysis() allowlists frontier cloud models (Claude 4.x, GPT-4 / 5, Gemini 2.x, DeepSeek-V3) and local models ≥7B params from strong families (qwen2.5-coder, llama3.x, mistral-large). When you toggle Data Mode on with a model below the bar (e.g. llama3.2:1b), the panel shows an inline orange banner suggesting capable alternatives. The mode still works — the banner is the heads-up, not a hard block.

What's NOT in v0.1.43 (deferred)

Thinking models

Qwen3 / DeepSeek-R1 and other "thinking" models emit <think>...</think> reasoning blocks before the actual answer. For JSON Tools, this breaks the parser (the thinking block isn't valid JSON). Notepatra's cleanup pipeline strips <think> tags three ways:

  1. Passes think: false in the /api/generate request body (honored by modern Ollama)
  2. Appends /no_think to the system prompt (honored by some models as a slash-command)
  3. Regex-strips <think>...</think> from the final response as a defensive backup
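
A minimal sketch of the defensive strip in step 3 (the exact pattern Notepatra uses may differ):

#include <QRegularExpression>
#include <QString>

// Remove any <think>...</think> blocks the model emitted despite think:false.
// DotMatchesEverythingOption lets '.' span newlines inside the block.
QString stripThinking(QString reply) {
    static const QRegularExpression kThinkBlock(
        "<think>.*?</think>",
        QRegularExpression::DotMatchesEverythingOption);
    return reply.remove(kThinkBlock).trimmed();
}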

For the AI Assistant chat, you can toggle Show thinking to see the reasoning if you're curious.

Vision models & file attachments

The 📎 attach button accepts any file type. What happens depends on the kind:

File kind | Handled how
🖼 Images (png/jpg/webp/gif/bmp) | Loaded as QImage, downscaled to max 1280px, re-encoded as PNG, base64-encoded, passed in the images field of /api/generate. Vision models (llava, llama3.2-vision, qwen2-vl, moondream, granite-vision) actually see the image. Non-vision models silently ignore the field.
📕 PDF | pdftotext -layout via QProcess (requires poppler-utils), extracted text appended to the prompt as context, capped at 100 KB.
📘 DOCX / ODT | unzip -p file.docx word/document.xml, XML tags stripped, appended to prompt.
📙 PPTX | unzip -p file.pptx ppt/slides/slide1.xml, XML stripped, appended (first slide only for now).
📗 XLSX | unzip -p file.xlsx xl/sharedStrings.xml, appended.
📄 Text / code (*.txt / *.md / *.json / *.py / *.cpp / *.js / ...) | Read raw as UTF-8, capped at 100 KB, appended as context.
Anything else | Attempted as text (UTF-8).

Keyboard shortcuts

File

Ctrl+N | New
Ctrl+O | Open
Ctrl+S | Save
Ctrl+Shift+S | Save All
Ctrl+W | Close tab
Ctrl+Shift+T | Reopen last closed tab
Ctrl+P | Print

Edit

Ctrl+Z | Undo
Ctrl+Y / Ctrl+Shift+Z | Redo
Ctrl+X / Ctrl+C / Ctrl+V | Cut / Copy / Paste
Ctrl+A | Select all
Ctrl+D | Duplicate line
Ctrl+Shift+K | Delete line
Ctrl+/ | Toggle line comment
Ctrl+Shift+U | UPPERCASE selection
Ctrl+U | lowercase selection
Alt+↑ / Alt+↓ | Move line up / down

Search

Ctrl+F | Find
Ctrl+H | Replace
F3 / Shift+F3 | Find next / previous
Ctrl+G | Go to line
Ctrl+B | Go to matching brace (swivels between open/close)
Ctrl+F2 | Toggle bookmark
F2 | Next bookmark
Shift+F2 | Previous bookmark

View / Navigation

Ctrl+= / Ctrl+- | Zoom in / out
Ctrl+0 | Reset zoom
Ctrl+Tab | Next tab
Ctrl+Shift+Tab | Previous tab
Ctrl+Shift+E | Toggle File Explorer sidebar
Ctrl+Shift+A | Toggle AI Assistant dock
Ctrl+Shift+G | Open Project Search
F11 | Full screen

Macro

Ctrl+Shift+R | Start / stop macro recording
Ctrl+Shift+P | Play last macro

Command-line flags

notepatra [options] [file1] [file2] ...

Options:
  -h, --help       Show help and exit
  -v, --version    Show version and exit
  -n, --new        Open a new window, don't restore session
  --line N         Go to line N in the first file
  --theme NAME     Use theme: Light, Dark, Monokai

Examples:
  notepatra                       Open with last session restored
  notepatra file.py               Open a file
  notepatra --line 42 file.py     Open at line 42
  notepatra --theme Dark          Start in dark mode
  notepatra *.json                Open multiple files

Config file layout

All persistent state lives under:

OS | Path
Linux / macOS | ~/.config/notepatra/
Windows | %LOCALAPPDATA%\Notepatra\

Files inside that directory

Build from source

Dependencies

One-liner (Linux/macOS)

git clone https://github.com/singhpratech/notepatra.git
cd notepatra
./build.sh

Add --tests to also build and run the regression suite via CTest.

Manual CMake

cd notepatra
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=ON
cmake --build . -j$(nproc)
./notepatra --version

Windows (PowerShell, MSVC)

# 1. Install Qt5 via aqtinstall or install-qt-action
# 2. Build QScintilla from Riverbank source via qmake + nmake
# 3. Then:
cd notepatra
mkdir build; cd build
cmake .. -G "Visual Studio 17 2022" -A x64
cmake --build . --config Release

See .github/workflows/build.yml for the exact CI command sequence.

Verifying releases

Every release ships with:

SHA-256

curl -sL -O https://github.com/singhpratech/notepatra/releases/latest/download/SHA256SUMS
sha256sum -c SHA256SUMS --ignore-missing

Cosign verify

cosign verify-blob \
  --certificate-identity-regexp '^https://github.com/singhpratech/notepatra/' \
  --certificate-oidc-issuer 'https://token.actions.githubusercontent.com' \
  --certificate notepatra-linux-x64.tar.gz.pem \
  --signature  notepatra-linux-x64.tar.gz.sig \
  notepatra-linux-x64.tar.gz

SLSA attestation

gh attestation verify notepatra-linux-x64.tar.gz --owner singhpratech

Full threat model and disclosure policy in SECURITY.md.

Troubleshooting

Notepatra opens but no text renders / white-on-white

Likely a lexer palette bug. Try Settings → Theme → Light then back to your theme. If it persists, file a bug with your OS + theme + a screenshot.

AI Fix doesn't do anything

  1. Check the Ollama Status bar at the top of the JSON Tools panel — is the dot green?
  2. If red: ollama serve in a terminal, check port 11434.
  3. If green but no response: make sure you've pulled at least one model: ollama list.
  4. Check Notepatra's stderr for the streaming tokens / error.

Windows: "This program can't start because ... was not found"

The NSIS installer bundles Qt DLLs, but the portable .zip depends on the Visual C++ Redistributable. Install the latest VC++ Redistributable and relaunch.

macOS: "Notepatra can't be opened because it is from an unidentified developer"

macOS Gatekeeper. Right-click the .app, choose Open, then Open again in the warning dialog. Or run xattr -cr Notepatra.app to clear the quarantine attribute.

Linux ARM64 binary doesn't launch

Make sure the file is executable: chmod +x notepatra. Check Qt5 is installed for your architecture.

FAQ

Is Notepatra a port of another editor?

No. Notepatra is its own editor — written from scratch in C++17 with a Rust core for hot paths (mmap I/O, Aho-Corasick search, Myers diff, formatters). It runs natively on Linux x64 / ARM64, macOS Apple Silicon, and Windows x64 from a single codebase.

How big is the binary?

The bare executable is ~9 MB stripped on each platform. Installed footprint ranges from ~5 MB on Linux (Qt comes from your distro) to ~85 MB on Windows (Qt5 DLLs + QScintilla DLL bundled in the MSI). Truly tiny — and zero browser runtime.

Does my code go to the cloud?

By default, no — Notepatra is local-first. The outbound connections are: (1) your selected local AI backend (Ollama on localhost:11434, llama-server, LM Studio, etc.) when you use AI features, (2) git push/pull when you click the Git panel buttons, (3) the REST client when you send a request, (4) a single GitHub-API call on launch when the auto-updater checks for new releases. Cloud AI backends (OpenAI / OpenRouter / Anthropic via OpenRouter / Gemini via OpenRouter) are opt-in via the AI dock backend dropdown — your code only leaves the machine when you explicitly point Notepatra at one. No analytics, no telemetry. Verifiable with strace.

Why GPL-3.0?

Because the source is open and modifications should stay open. If you embed Notepatra in a product, the product has to be GPL-compatible. If that's a problem, reach out — alternative licensing can be discussed.

Will you add LSP support?

Planned for a future milestone. The goal is to plug into clangd, rust-analyzer, pyright, etc. for proper autocomplete and go-to-definition. The Scintilla autocomplete today is word-based.

Where's the Windows ARM64 build?

Not yet — Windows ARM64 is a small but growing market. Tracked on the roadmap.

Contributing

See CONTRIBUTING.md for the full contributor guide. Short version:

Security policy

See SECURITY.md and /.well-known/security.txt. Reports via GitHub private vulnerability reporting. 90-day disclosure window by default.

License

Notepatra is licensed under the GNU General Public License v3.0. No warranty. See GPL-3.0 §15 and §16 for the disclaimer of warranty and limitation of liability.