Notepatra Documentation
A native C++/Rust code editor with local-first AI — six backends in the dropdown (Ollama, llama.cpp, Ollama Cloud, OpenRouter, OpenAI, Azure OpenAI), plus any OpenAI-compatible server (LM Studio, Jan, vLLM, KoboldCpp, llamafile, text-generation-webui) reachable via the llama.cpp entry. Fast, portable, free forever under GPL-3.0. This is the complete reference for installing, using, customizing, and extending Notepatra.
Introduction
Notepatra is a native code editor written in C++17 with a Rust core for the heavy lifting (memory-mapped file I/O, Aho-Corasick search, Myers diff, formatters). It targets Linux x64 / ARM64, macOS Apple Silicon, and Windows x64. The bare executable is ~9 MB stripped on each platform; downloads range from ~3.0 MB (Linux ARM64) to ~39.9 MB (Windows MSI with Qt DLLs bundled).
Unlike Electron-based editors, Notepatra does not ship a browser runtime. It uses Qt5 + QScintilla — a battle-tested Scintilla-based editor engine — so 226 file types, 92 language lexers, brace matching, folding, and code completion are built in.
Unlike cloud-first editors, Notepatra's AI is local-first. The AI dock dropdown ships six backends as of v0.1.55: Ollama (default · localhost:11434), llama.cpp's llama-server (loads any GGUF directly · localhost:8080), OpenRouter (cloud · 100+ models — Claude, GPT, Gemini, Grok, Kimi, …), Ollama Cloud (cloud · gpt-oss:120b, qwen3-coder:480b, deepseek-v3.1:671b), OpenAI direct, and Azure OpenAI (enterprise). To use any other OpenAI-compatible server (LM Studio, Jan, vLLM, KoboldCpp, llamafile, text-generation-webui), pick the llama.cpp entry and set the base URL in Settings → Preferences → AI. Nothing leaves your computer unless you pick a cloud backend. No mandatory API keys, no subscription, no telemetry.
The AI runs as an agent. Tick Coding Mode in the AI dock and the model gets 5 native tools — read_file, list_dir, search, write_file, apply_diff — plus three-layer path safety, a 25-call hard cap per turn, and persistent per-workspace chat history. See Coding Mode (agentic) below for the full surface.
Install
Quick install (recommended)
Linux
curl -sL https://notepatra.org/install.sh | bash
Downloads the latest release for your architecture (x64 or ARM64), verifies SHA-256, installs to ~/.local/bin/notepatra, creates a .desktop entry, and registers the hicolor icon theme. Rerun to upgrade.
macOS (Apple Silicon only)
curl -sL https://notepatra.org/install.sh | bash
Same script. Downloads .dmg, mounts it, copies Notepatra.app to /Applications, unmounts. Intel Macs are not shipped pre-built — build from source.
Windows (PowerShell)
irm https://notepatra.org/install.ps1 | iex
Downloads the latest notepatra-setup-<version>.exe, runs the NSIS installer which registers in Settings → Apps → Installed apps, creates Start Menu + optional Desktop shortcut, optionally adds to PATH. Uninstall via Control Panel works.
Manual download
Every release is at github.com/singhpratech/notepatra/releases/latest.
| Platform | Asset | Size |
|---|---|---|
| 🐧 Linux x64 | notepatra-linux-x64.tar.gz | ~3.2 MB |
| 🐧 Linux ARM64 | notepatra-linux-arm64.tar.gz | ~3.0 MB |
| 🍎 macOS Apple Silicon | notepatra-macos-arm64.dmg | ~26.6 MB |
| 🪟 Windows x64 (MSI) | notepatra-<version>.msi | ~39.9 MB |
| 🪟 Windows x64 (installer) | notepatra-setup-<version>.exe | ~32.0 MB |
| 🪟 Windows x64 (portable) | notepatra-windows-x64.zip | ~36.5 MB |
The Linux downloads rely on system Qt5 + QScintilla. On Debian/Ubuntu: sudo apt install qtbase5-dev libqscintilla2-qt5-dev. On Fedora: sudo dnf install qt5-qtbase qscintilla-qt5. On Arch: sudo pacman -S qt5-base qscintilla-qt5. The macOS and Windows downloads bundle Qt.
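If you'd rather install the Linux tarball by hand, the sketch below mirrors the binary-install part of what install.sh does (it skips the .desktop entry and icon registration). The direct-download URL uses GitHub's latest-release convention, and the layout assumption — that the archive unpacks to a single notepatra binary — is illustrative; adjust if yours differs.

cd /tmp
# asset name from the table above; URL follows GitHub's /releases/latest/download/ convention
curl -LO https://github.com/singhpratech/notepatra/releases/latest/download/notepatra-linux-x64.tar.gz
tar -xzf notepatra-linux-x64.tar.gz
mkdir -p ~/.local/bin
install -m 755 notepatra ~/.local/bin/notepatra   # assumes the binary sits at the archive root
~/.local/bin/notepatra &                          # launch to confirm it runs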
Uninstall
Notepatra installs cleanly and uninstalls cleanly. Pick your platform and the path you used to install. Every command below is copy-paste ready.
🍎 macOS (Sonoma · Sequoia · Tahoe)
One-liner
curl -fsSL https://notepatra.org/uninstall.sh | sh
Auto-detects macOS and removes the app, CLI symlink, and user data. Uses sudo if /Applications is not writable.
Manual (any install path)
# Remove the app (sudo only if /Applications is not writable)
sudo rm -rf /Applications/Notepatra.app
# …or if you installed to ~/Applications instead:
rm -rf ~/Applications/Notepatra.app
# Remove the CLI symlink
rm -f ~/.local/bin/notepatra
# Remove user config + cache + saved state
rm -rf ~/.config/notepatra
rm -rf ~/Library/Preferences/com.notepatra.editor.plist
rm -rf ~/Library/Saved\ Application\ State/com.notepatra.editor.savedState
rm -rf ~/Library/Caches/com.notepatra.editor
What Notepatra does NOT create
- No system-wide LaunchAgents/LaunchDaemons
- No Homebrew, MacPorts, or other package-manager entries
- No kernel extensions
- No background daemons
🐧 Linux (Ubuntu · Debian · Fedora · Arch · any distro)
One-liner
curl -fsSL https://notepatra.org/uninstall.sh | sh
Manual (install.sh path)
# Remove the binary
rm -f ~/.local/bin/notepatra
# Remove the desktop entry
rm -f ~/.local/share/applications/notepatra.desktop
# Remove icons (all sizes)
for sz in 16 32 48 64 128 256; do
rm -f ~/.local/share/icons/hicolor/${sz}x${sz}/apps/notepatra.png
done
gtk-update-icon-cache ~/.local/share/icons/hicolor 2>/dev/null
# Remove user config + cache + recovery
rm -rf ~/.config/notepatra
# Refresh the application menu so the launcher disappears
update-desktop-database ~/.local/share/applications 2>/dev/null
Manual (tarball extracted somewhere else)
# Find where you put it
which notepatra
# /opt/notepatra/notepatra ← example
# Remove that directory
sudo rm -rf /opt/notepatra
rm -f ~/.local/share/applications/notepatra.desktop
rm -rf ~/.config/notepatra
If you built from source
cd /path/to/notepatra-build
sudo cmake --build . --target uninstall # if you ran `make install`
# Or just delete the source tree:
cd .. && rm -rf notepatra
What is NOT touched
- System Qt5 and QScintilla packages — shared with other apps, never auto-removed
- Ollama — separate program, manage via the ollama CLI
- Files you opened or edited with Notepatra
🪟 Windows 10 · 11
If you installed via the NSIS installer (notepatra-setup-X.Y.Z.exe)
Easiest: use Windows Settings.
- Open Settings (Win+I)
- Apps → Installed apps
- Search for Notepatra
- Three-dot menu → Uninstall
- The NSIS uninstaller runs → Next → Uninstall
Alternative: run the uninstaller directly from %LOCALAPPDATA%\Notepatra\uninstall.exe.
The uninstaller automatically removes:
- %LOCALAPPDATA%\Notepatra\ (the entire install directory + Qt DLLs)
- Start Menu shortcut
- Desktop shortcut (if you opted in)
- PATH entry (if you opted in)
- The registry entry HKCU\Software\Microsoft\Windows\CurrentVersion\Uninstall\Notepatra
One-liner (PowerShell)
irm https://notepatra.org/uninstall.ps1 | iex
Manual (PowerShell)
# Remove install directory (Qt DLLs + .exe + uninstall.ps1)
Remove-Item -Recurse -Force "$env:LOCALAPPDATA\Notepatra"
# Remove Start Menu + Desktop shortcuts
Remove-Item -Force "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\Notepatra.lnk"
Remove-Item -Force "$env:USERPROFILE\Desktop\Notepatra.lnk"
# Remove from user PATH
$p = [Environment]::GetEnvironmentVariable("PATH", "User") -split ';' |
Where-Object { $_ -notlike "*Notepatra*" } | Where-Object { $_ }
[Environment]::SetEnvironmentVariable("PATH", ($p -join ';'), "User")
# Remove the Installed Apps registry entry
Remove-Item -Recurse -Force "HKCU:\Software\Microsoft\Windows\CurrentVersion\Uninstall\Notepatra"
# Remove user config + saved state
Remove-Item -Recurse -Force "$env:USERPROFILE\.config\notepatra"
Remove-Item -Recurse -Force "$env:APPDATA\Notepatra"
Portable zip
If you used the portable notepatra-windows-x64.zip, no installer was ever run — just delete the folder you extracted.
Remove-Item -Recurse -Force C:\path\to\notepatra-folder
What is NOT touched
- Visual C++ Redistributables (used by other apps)
- Files you opened or edited with Notepatra
- Ollama installation
First launch
Launch Notepatra from your start menu / Launchpad / notepatra command. You'll see an empty editor with the menu bar, toolbar, tab bar, status bar, and (collapsed) side panels.
The first time you run Notepatra, it creates ~/.config/notepatra/ (Linux/macOS) or %LOCALAPPDATA%\Notepatra\ (Windows) and writes a default config.json. Themes, fonts, tab width, word wrap, etc. are persisted there.
Lite mode vs Full mode (v0.1.64+)
Starting in v0.1.64, Notepatra ships two build flavors per release. Lite is the default download — a bare ~9 MB binary with no heavy dependencies bundled. Full is the opt-in flavor that includes QtWebEngine for inline Vega-Lite chart rendering (≈ 95 MB extra). Heavy features go through packs that you install on demand, the way DuckDB ships a tiny core plus optional extensions (httpfs, parquet, spatial, etc.).
Why this split?
v0.1.63 bundled QtWebEngine into every release for one specific feature — the AI Data Analyst's inline chart card. Most users never trigger a chart, but every install paid the disk + download cost. v0.1.64 flipped the default: Lite by default, charts moved to an optional pack. The same architecture mirrors how Cursor / VS Code / Sublime add language servers and renderers — small core, install what you need.
| Flavor | Bare binary | What you get | What's missing |
|---|---|---|---|
| Lite (default) | ~9 MB stripped | Everything: editor, lexers, AI Assistant, Coding Mode, Data Analyst tool calls, Git, REST client, hex editor, plugins, themes. | Inline Vega-Lite chart rendering. The Data Analyst can still generate chart specs — they show as a "Charts Pack required" card with [View JSON] to copy the spec out. |
| Full (opt-in) | ~95 MB bundled | Everything in Lite, plus the QtWebEngine-backed chart renderer that paints Vega-Lite specs inline in the chat transcript. | — |
Packs — what's installed where
The plugin loader (src/plugin_loader.{h,cpp}) tracks which packs are available. Each pack lives under your per-user plugin directory:
- Linux: ~/.local/share/notepatra/plugins/<pack>/
- macOS: ~/Library/Application Support/notepatra/plugins/<pack>/
- Windows: %APPDATA%/notepatra/plugins/<pack>/
v0.1.66 ships with the charts pack bundled into the Full flavor (it's the same binary — the pack is "installed" if WebEngine was linked at compile time). v0.1.66+ adds in-app download / SHA-256 verification / runtime QPluginLoader activation so packs can be added without swapping binaries.
| Pack ID | Size | What it does | Status |
|---|---|---|---|
| charts | ≈ 95 MB | Renders Vega-Lite charts (bar / line / scatter / area / composite) inline in the chat transcript. Powered by QtWebEngine + vega-embed. | Bundled in the Full flavor as of v0.1.66 (Linux x64 / Linux ARM64). macOS / Windows Full builds land in v0.1.66. |
| pdf | ≈ 12 MB | Rasterises PDFs to PNG pages so vision-capable AI models can read them. Powered by Poppler-Qt5. | Planned for v0.1.67. Until then, dragging a PDF into the AI dock shows the "use a vision-capable model" error bubble. |
Upgrading from Lite to Full
Three paths, pick whichever matches your situation:
Path A — Click [Install charts pack] in the app
When you ask the AI Assistant to chart something in Data Analyst mode, the lite binary shows the 📊 Chart rendering requires the Charts Pack card with two buttons:
- [Install charts pack] — opens a dialog explaining the lite/full split, then opens your default browser to the GitHub Releases page for the current tag. Pick the file ending in -full for your OS.
- [View JSON instead] — appends the raw Vega-Lite spec under the card as a fenced code block. You can copy it into any standalone Vega-Lite editor (vega.github.io/editor) without installing anything.
Path B — Download the Full flavor manually
On the GitHub Releases page pick the file with -full in the name:
- Linux x64: notepatra-linux-x64-full.tar.gz — extract it, then replace ~/.local/bin/notepatra (or wherever you installed). Your config, chat history, and connections live outside the binary and are preserved.
- Linux ARM64: notepatra-linux-arm64-full.tar.gz — same pattern.
- macOS: Full DMG ships in v0.1.66. Until then, use Path C.
- Windows: Full installer ships in v0.1.66. Until then, use Path C.
Path C — Build from source with -DNOTEPATRA_WITH_WEBENGINE=ON
Works on any platform with QtWebEngine dev headers available:
- Linux: sudo apt-get install qtwebengine5-dev libqt5webenginewidgets5 (Debian / Ubuntu), sudo dnf install qt5-qtwebengine-devel (Fedora), or sudo pacman -S qt5-webengine (Arch)
- macOS: brew install qt@5 already bundles WebEngine
- Windows: install the qtwebengine module via the install-qt-action equivalent (or aqtinstall)
Then:
git clone https://github.com/singhpratech/notepatra.git
cd notepatra
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DNOTEPATRA_WITH_WEBENGINE=ON
make -j$(nproc)
./notepatra
The binary will report a normal version string but the chart renderer's WebEngine code path is now compiled in, so the Data Analyst's chart cards render inline instead of showing the install prompt.
FAQ
- Do I lose anything by staying on Lite? No — every editor / lexer / AI / Git / data / plugin feature works identically. The only difference is that generate_chart tool results paint as a "Charts Pack required" card instead of as an inline rendered chart. You can still see the spec via [View JSON instead] and copy it elsewhere.
- Can I downgrade Full → Lite? Yes. Just download the Lite tarball and replace the binary. Your config / chat history / connections are preserved (they live under ~/.config/notepatra, not inside the binary).
- Will the chart spec data be lost if I'm on Lite? No — the AI's tool result is preserved in your chat history just like any other tool result. If you upgrade to Full later, opening the same chat shows the rendered chart for those past results.
- When does the in-app installer ship? v0.1.66 candidate. The architectural pieces (plugin_loader, manifest schema, download path) are scaffolded in v0.1.66; what's left is the actual HTTP fetch + SHA-256 verify + QPluginLoader::load wiring, plus testing across macOS / Windows runners.
UI overview
- Menu bar — File, Edit, Search, View, Features, Encoding, Language, Settings, Tools, Plugins, Macro, Run, Window, Help. Most actions have keyboard shortcuts (full list).
- Toolbar — quick-access buttons for New, Open, Save, Save All, Close, Cut, Copy, Paste, Undo, Redo, Find, Replace, Zoom In/Out, Word Wrap, Show All Characters, Function List, Workspace.
- Tab bar — one tab per open file. Modified tabs show a dot. Close with the × or middle-click.
- Editor area — Scintilla-based editor with line numbers, code folding, brace matching, syntax highlighting, caret-line highlight, word-wrap toggle, EOL visibility toggle, rulers, optional crosshair.
- Status bar — language, file length, line count, word count, cursor line/col, EOL type (CRLF/LF), encoding, INS/OVR.
- Side panels — File Explorer, Function List, Search Results, AI Assistant. Dockable/floatable.
Editor basics
The editor is QScintilla 2.14.1 (official Riverbank source) under the hood: folding, brace matching, auto-indent, bookmarks, multi-cursor (hold Alt), rectangular selection, code completion, and snippets are all built in.
Opening & saving
- New Ctrl+N — empty untitled tab
- Open Ctrl+O — multi-select file dialog, opens each as a tab
- Save Ctrl+S — save current tab (prompts for path if untitled)
- Save All Ctrl+Shift+S — saves every modified tab with a known path
- Close Ctrl+W — close current tab (prompts to save if modified)
- Drag & drop files from the OS file manager into the editor — they open as tabs
File-size limits
Notepatra uses the Rust core's memory-mapped file I/O for large files. Limits:
- Up to 50 MB — full syntax highlighting, auto-completion, all features enabled
- 50 MB – 500 MB — syntax highlighting and auto-completion disabled for performance, still fully editable
- 500 MB – 2 GB — minimal features, read-mostly mode
- > 2 GB — refused; use a streaming tool like less
Languages & lexers
Notepatra ships 92 language lexers as of v0.1.66 (covering 226 file extensions). The Language menu is split into a narrow two-tier layout — Common + SQL Dialects at the top, More Languages as an alphabetical submenu of the rest. Extensions are auto-detected from the filename.
Core (Qt-bundled): Python, JavaScript/TypeScript/JSX, CoffeeScript, C/C++, C#, D, Java/Kotlin, HTML/PHP, CSS/SCSS/LESS, XML, JSON, SQL (ANSI / T-SQL / PL/SQL / MySQL / PostgreSQL / SQLite), Bash, Batch, Ruby, Perl, Lua, TCL, Fortran, Matlab, Octave, IDL, NASM, MASM, Verilog, VHDL, TeX, PostScript, POV, Spice, AVS, Properties, PO, IntelHex, SRecord, Markdown, YAML, Diff, Pascal, CMake, Makefile.
v0.1.55 additions (32 dedicated lexers, keyword tables verified against official specs): Dart · Solidity · Zig · Vala · Hack · Julia · R · Protobuf · F# · HCL/Terraform · Thrift · GraphQL · GDScript · Nim · Cython · Mojo · Crystal · Elixir · Scala · Groovy · Apex · Jinja · Liquid · Twig · Dockerfile · Fish · Nushell · TOML · DotEnv · Gitignore · JSON5 · BibTeX. Each ships with comment / uncomment / block-comment syntax (Ctrl+Q / Ctrl+Shift+Q) and proper file-extension routing — .dart, .zig, .jl, .toml, .ex, Cargo.toml, Dockerfile, .gitignore, .env, .fish, .json5, .tf, etc. now point at their dedicated lexers instead of falling back to the closest-fit generic.
Themes & palette
Three themes ship: Light (default), Dark, and Monokai. Change via Settings → Theme. The theme is persisted in config.json.
Every theme applies a curated 9-hue palette: keywords blue bold (#0000FF), types violet (#8000FF), comments green italic (#008000), numbers orange (#FF8000), strings grey (#808080), operators navy bold (#000080), preprocessor brown (#804000), classes maroon (#7F0000), identifiers plain (default text). Dark theme uses Zenburn-derived hues (warm sand keywords, sage types, rose strings, olive operators, peach preprocessor) so each token kind is on a distinct hue arc — no more "all blue shades" effect. The palette is driven by lexer->description(i) — it walks every style slot in every lexer, matches substrings like "keyword", "comment", "string", and paints accordingly. Works across all 92 lexers without hard-coding.
Rulers & crosshair overlay
Optional visual aids for long files and precise cursor placement:
- Document rulers — thin horizontal + vertical ruler bands showing pixel offsets from the top-left corner. Toggle: View → Document Rulers.
- Crosshair overlay — a thin line extending from the caret position to both edges of the editor, making the current line + column unmistakable. Toggle: View → Crosshair.
Find & replace
5-tab dialog: Find, Replace, Find in Files, Mark, Go to.
- Normal / Extended / Regex modes — extended supports \n, \t, \xNN escapes; regex uses PCRE2.
- Match case, Whole word, Wrap around, In selection checkboxes.
- Find in Files walks a directory recursively (respects glob patterns), results land in the Search Results panel — double-click to jump to line.
- Mark highlights every occurrence of a pattern without moving the caret.
- Find / Replace All across all open tabs — one click.
Project Search (Ctrl+Shift+G)
A separate, full-tab search dedicated to scanning an entire folder tree. Designed for the "where is this used across the whole project?" workflow — different from the inline Find & Replace dialog (which targets the current document or open tabs). Streams results live as files are scanned; safe on multi-GB log files.
What it searches
- File contents — every line of every text file under the selected folder. Skips binary files automatically (NUL-byte heuristic in the first 4 KB).
- File names — when Also match file names is ticked, files whose basename matches the query show up at the top of the results, separately from content matches.
- Heavy directory pruning — .git, node_modules, build, target, dist, .venv, venv, __pycache__, .cache, .gradle, DerivedData, .idea, .vs are skipped at the directory level so a 10k-file node_modules doesn't drown your results.
- OneDrive / cloud-placeholder skip on Windows — files marked offline by cloud-storage providers are skipped (would otherwise hang the search waiting for a download).
Query modes
- Plain text (default) — substring match. Multi-word phrases work directly: search import os (with the space) and you get every file that contains that exact phrase. No quoting needed. Internal whitespace is preserved; leading/trailing whitespace is auto-trimmed (so " import os " matches the same lines as "import os").
- Match case — toggle case-sensitive matching.
- Whole word — wraps the query in \b...\b word boundaries. For multi-word phrases the boundaries land at the start/end of the phrase, so import os with whole-word ON matches import os but not myimport os.path.
- Regex — Rust regex syntax (no PCRE backreferences). Powers patterns like \bfn\s+\w+ for "all Rust function definitions."
- Also match file names — adds filename hits to the result list.
- Include binary files — off by default. Tick to also scan binaries (rare; useful for hex strings).
Engine
Plain-text searches under ~50 MB per file route through a Rust Aho-Corasick fast path (multiple patterns at once, sub-millisecond on small files). Larger files and regex queries use a streaming line-by-line scanner. Each match reports line:column (column is 1-based and counted in characters, not bytes — UTF-8 multi-byte sequences count as one).
Results
Results stream into a tree-grouped panel: one node per file with hits, expandable to per-line snippets. Double-click any hit to jump the editor to the exact line and column. Live counters show files scanned, matches so far, and elapsed time. Cancel the search any time — the worker thread bails on the next file boundary.
Macros
Record keyboard + mouse operations and play them back:
- Ctrl+Shift+R — start/stop recording
- Ctrl+Shift+P — play last macro
- Macro → Run Multiple Times — play N times
- Macro → Save — persist the macro as JSON to ~/.config/notepatra/macros/
- Macro → Load — restore a saved macro
Uses QScintilla's built-in QsciMacro serialization.
Sessions & crash recovery
On close, Notepatra saves ~/.config/notepatra/session.json: list of open file paths, cursor positions, window geometry, maximized state, theme, active tab. On next launch, all tabs reopen at the same cursor positions.
Every 10 seconds, Notepatra writes unsaved buffers to ~/.config/notepatra/recovery/<crash-id>.txt. If the app crashes (caught by SIGSEGV/SIGABRT/SIGFPE handlers), the next launch detects .crash_flag and offers to restore unsaved work.
JSON Tools plugin
Open: Plugins → JSON Tools (inbuilt). Panel opens as a new tab.
Buttons
- Format — parses JSON, pretty-prints with 4-space indent via Rust's serde_json. Falls through to a manual pretty printer for broken JSON.
- Minify — strips all whitespace. Attempts to fix first, then minifies; if unparseable, strips whitespace with replace(whitespace, "").
- Fix + Format — runs the deterministic Rust fixer (RustCore::fixJson) which handles: single→double quote conversion, unquoted keys, missing closing braces/brackets (stack-based), trailing comma removal, nested-brace insertion via regex passes. Produces a human-readable report like "Fixed 2 issue(s): Added 3 missing closer(s) at end: ]}}; Removed 1 trailing comma(s)".
- AI Fix (Ollama) — sends the broken JSON to your local Ollama model with a strict minimal-change prompt. See AI Fix pipeline for details.
- Show Diff — opens a side-by-side Compare tab with the original on the left and the last fix's output on the right. Works for any action: Format, Minify, Fix+Format, AI Fix.
- Copy Output — copies the current panel content to the clipboard.
Session log
Below the Scintilla output, a small list widget records every action taken during the session with before/after char counts, a delta, and a smart description:
[14:22:31] Format: 60 → 86 chars (+26) +2 commas, +1 brace
[14:22:45] AI Fix (Ollama): 60 → 63 chars (+3) +2 commas [5 lines]
[14:23:02] Minify: 98 → 60 chars (-38)
Color-coded: teal for "added/fixed", amber for "shrunk/minified", gray for no-op. Capped at 50 entries, scrollable.
HTML Tools plugin
Same structure as JSON Tools but with HTML-specific buttons: Format (2 spaces), Format (4 spaces), Minify, Fix + Format, AI Fix, Show Diff, Copy Output. Fixer handles: self-closing <img>/<br>, mismatched tags, attribute quotes.
Bracket Tools plugin
Generic bracket/quote balancer. Works on any language. Uses the Rust core's bracket_fix module which walks the input, tracks a bracket stack, and closes any unclosed opens at the end in reverse order.
SQL Formatter plugin
Open: Plugins → SQL Formatter (inbuilt).
- Dialect dropdown — ANSI SQL / T-SQL (SQL Server) / PL/SQL (Oracle) / MySQL / PostgreSQL / SQLite. Switching dialect re-colorizes the visible buffer with that dialect's keywords.
- UPPERCASE keywords checkbox — toggles keyword casing in the formatted output.
- Indent spin box — 1–8 spaces.
- Format button — invokes the Rust SQL formatter.
Dialect-specific keyword sets are fed to Scintilla via SCI_SETKEYWORDS (since QsciLexerSQL::setKeywords is protected). T-SQL adds DECLARE/MERGE/OUTPUT/PIVOT/OVER/PARTITION, PL/SQL adds PLS_INTEGER/SYSDATE/NVL/DECODE/CONNECT BY, etc.
Compare
A side-by-side diff view shipped under Plugins → Compare. Backed by Rust's Myers diff (similar crate) and a custom CompareWidget rendered with Qt.
Visual UX inspired by ComparePlus by Pavel Nedev. Notepatra's implementation is a fresh Qt + Rust port in a different codebase, but the visual conventions are credited to Pavel.
How it works
- Pick a left and right file (open tab, unsaved tab, or file on disk).
- Rust's Myers diff (similar crate) produces a list of Equal / Insert / Delete entries.
- Consecutive Delete + Insert blocks are paired into "modified" rows at the same visual line.
- Within paired modified rows, a common-prefix + common-suffix detector finds the exact differing bytes and highlights only those characters — not the whole line.
Visual markers
| Row kind | Background | Symbol margin | Character highlight |
|---|---|---|---|
| Equal / context | white | — | — |
| Modified (paired) | pale yellow #FFFBE6 | pink ~ (Circle) | red box on left, green box on right — only differing chars |
| Added (right only) | mint green #D4F4D4 | green + (Plus) | — |
| Deleted (left only) | salmon #F4D4D4 | red − (Minus) | — |
| Placeholder (empty on one side) | light blue #E8F0F8 | green + in margin | — |
Line numbers
Each panel uses a TextMargin with custom per-row text. The LEFT panel shows the original LEFT-source line numbers; the RIGHT panel shows the original RIGHT-source line numbers. They diverge cleanly when there are insertions/deletions — the empty placeholder on one side shows a green + instead of a number.
Toolbar
- < Prev / Next > — jump to previous/next diff
- Recompare — re-run the diff algorithm
- Ignore spaces — defaults to ON as of v0.1.36. Whitespace-only differences (re-indentation, alignment, trailing spaces) are collapsed so they don't show up as diffs. Untick for byte-exact mode (useful for YAML / Python where indentation IS the meaning).
- Ignore case, Ignore empty lines — toggle and recompute
Scroll sync
Both vertical and horizontal scrollbars are mirrored. Drag either side's scrollbar and both panels move together. Qt's built-in valueChanged signal is naturally cycle-safe because setValue(x) is a no-op when x == current.
Git integration
Opens as a dockable panel. Shows:
- Current branch name with upstream tracking status
- Changed files list — added (green), modified (yellow), deleted (red), untracked
- Git gutter in the editor margin — colored line markers for added/modified/deleted lines compared to HEAD
- Push / Pull / Refresh buttons
- Open on GitHub — opens the current file's blob URL in your browser
Shells out to git via QProcess. Non-invasive — no libgit2 dependency.
Terminal
Built-in terminal panel (Linux/macOS). Uses QProcess to spawn bash/zsh and feeds stdout/stderr to a monospace widget. Not a full pty (no colors, no curses) — fine for simple commands and scripts, not for vim/htop.
REST client
A Postman-lite panel: URL field, method dropdown (GET/POST/PUT/PATCH/DELETE/HEAD), headers editor, request body editor with JSON syntax highlighting, response viewer with pretty-printed JSON output + status code + timing.
Hex editor
Byte-level view of any file. Left column: offset. Middle: hex bytes in 16-byte rows. Right: printable ASCII. Read-only for now.
Markdown preview
Live side-by-side preview for .md/.markdown files. Uses Qt's QTextDocument::setMarkdown() — CommonMark-compatible. Scrolls in sync with the source.
AI integration overview
Notepatra's AI is local-first by default with optional cloud backends. As of v0.1.55, six backends are available — pick from the AI dock dropdown:
| Backend | Type | Protocol / URL | What it gives you |
|---|---|---|---|
| Ollama (default) | Local | Native /api/generate · http://localhost:11434 | Easiest to install. One-line install, auto-detects models with /api/tags. /api/show capability probe (v0.1.55) auto-detects which models support tools / thinking / vision. |
| Ollama Cloud | Cloud | OpenAI-compat over HTTPS · https://ollama.com | Same Ollama models, served from Ollama's hosted infrastructure. Per-provider key slot in Settings. |
| llama.cpp | Local | OpenAI /v1/chat/completions · http://localhost:8080 | Loads any GGUF file directly — no daemon, no config format. Maximum control + minimum overhead. |
| OpenRouter | Cloud | OpenAI-compat · https://openrouter.ai/api | One key, hundreds of models (Anthropic, OpenAI, Google, Mistral, Meta, xAI, DeepSeek, Qwen). Unified reasoning field — Think checkbox toggles thinking on Claude / o-series / Gemini consistently. |
| OpenAI | Cloud | OpenAI /v1/chat/completions · https://api.openai.com | Direct OpenAI access for GPT-4 family + o-series. Per-provider key slot. |
| Azure OpenAI | Cloud | Azure deployment URL with ?api-version=... | Enterprise OpenAI through your Azure subscription. Configure resource name + deployment name + API version in the dedicated Azure section of the Settings dialog. |
If the selected backend isn't reachable, AI panels show a backend-specific banner (e.g. "Ollama not running — start it: ollama serve", or for cloud backends "OpenRouter key not set — open Settings"). The cloud-config banner is red when keys are missing for the selected backend, green when configured.
Per-provider key slots (v0.1.55). The AI Settings dialog has a 4-section layout — OpenRouter, OpenAI, Ollama Cloud, Azure OpenAI — each with its own Test / Save / Forget buttons and key slot. Strict no-cross-provider lookup: the OpenRouter backend will never accidentally use the OpenAI key, and vice-versa. Legacy single-key configs migrate automatically by sniffing the prefix (sk-or- → OpenRouter, sk- → OpenAI).
Searchable model dropdown (v0.1.55). Type any provider key (openai, anthropic, google, xai) or alias (grok, claude, gpt, gemini, kimi, qwen) and the dropdown filters to that provider's models — works whether you remember the brand name or the model name.
AI surfaces in Notepatra:
- AI Fix buttons in JSON Tools / HTML Tools / Bracket Tools plugins — sends broken content, gets fixed content back
- AI Assistant panel — general-purpose chat with your code, supports multi-turn, attach files, voice input. Modes: Chat / Coding / Data Analyst.
- Status bar in every AI panel showing connectivity dot, the searchable model dropdown, and a refresh button
Privacy: "Share file with AI" toggle (v0.1.55). A red lock indicator next to the AI dock title controls whether the file currently in the editor is included in the AI's context. Default OFF. Available only in Coding Mode — Chat and Data Analyst modes never see file content. The system prompt teaches the model to politely instruct you to enable the toggle if asked "can you see this file?".
Credential scrubber (v0.1.55). Every editor-derived chunk (selection, workspace block, attached file, user input) passes through a 14-pattern redactor before leaving the machine — OpenRouter / Anthropic / OpenAI / GitHub / GitLab / AWS / Slack / Stripe / SendGrid / Google API / JWT / PEM private-key blocks / generic password=/api_key=/token=. Replacement is in-place with a [REDACTED-VENDOR-KEY] marker so the model still sees that a credential was redacted.
Ollama setup
- Install Ollama — curl -fsSL https://ollama.com/install.sh | sh (Linux/macOS) or download from ollama.com (Windows).
- Start the daemon — ollama serve. Listens on http://localhost:11434 by default.
- Pull at least one model. Recommended starters:
  - ollama pull qwen2.5:7b — 4.7 GB, general-purpose, good at code
  - ollama pull llama3.2:3b — 2 GB, smaller, faster
  - ollama pull codellama:7b — 3.8 GB, code-specialized
  - ollama pull qwen2.5-coder:7b — 4.4 GB, code-specialized + recent
- Launch Notepatra → open any AI panel → the model dropdown auto-populates with whatever you pulled.
Default model preference order: qwen2.5-coder:3b → qwen2.5:3b → gemma2:2b → gemma3:4b → llama3.2:3b.
llama.cpp setup (GGUF)
llama.cpp runs any .gguf file directly — no daemon, no config format. Best if you have specific GGUF models from Hugging Face and want maximum control.
- Install llama.cpp — brew install llama.cpp (macOS) or build from the repo (Linux / Windows).
- Download a GGUF model from huggingface.co. Try Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf (~2 GB, excellent for code).
- Run the server — llama-server -m path/to/model.gguf --port 8080. Exposes OpenAI-compatible endpoints at http://localhost:8080/v1/.
- In Notepatra → Settings → Preferences → AI → pick "llama.cpp" → Save. Done.
AI backend setup — full guide
The condensed setup blocks above cover the happy path. This guide walks every supported backend through install, first-run, and the most common failure modes — per OS where the steps differ. Use it when something doesn't work or you're setting up a new machine.
Ollama (recommended for beginners)
One installer, auto-discovery of models, native HTTP API on port 11434. Works on every desktop OS Notepatra runs on.
🐧 Linux
- Install — Ollama's one-liner: curl -fsSL https://ollama.com/install.sh | sh. The installer drops a binary in /usr/local/bin/ollama and registers a systemd unit ollama.service.
- Daemon — the installer starts and enables it. Check status with systemctl status ollama (follow logs with journalctl -u ollama -f). If your distro has no systemd (Void, Artix, Alpine, WSL1), run ollama serve in a terminal you keep open.
- Pull a model: ollama pull qwen2.5-coder:7b
- Verify: ollama list (should show the model) and curl http://localhost:11434/api/tags (JSON list of installed models).
- Notepatra config — open the AI dock, pick "Ollama" in the backend dropdown. The model dropdown auto-populates. No URL field to set; http://localhost:11434 is hard-wired.
🪟 Windows
- Install — download OllamaSetup.exe from ollama.com/download/windows and run it. Installs to %LOCALAPPDATA%\Programs\Ollama and starts a tray icon at boot.
- Daemon — the tray icon means it's running. Right-click the tray icon → "Quit Ollama" to stop. Re-launch from the Start menu.
- Pull a model — open PowerShell or cmd: ollama pull qwen2.5-coder:7b
- Verify: ollama list and curl http://localhost:11434/api/tags
- Notepatra config — same as Linux: pick "Ollama" in the backend dropdown.
🍎 macOS
- Install — either the .dmg from ollama.com/download/mac, or via Homebrew: brew install --cask ollama
- Daemon — launching the Ollama app puts an icon in the menu bar and starts the server. Quit from the menu-bar icon to stop.
- Pull a model — open Terminal: ollama pull qwen2.5-coder:7b
- Verify: ollama list and curl http://localhost:11434/api/tags
- Notepatra config — pick "Ollama" in the backend dropdown.
Troubleshooting
"Ollama not running" banner won't go away
Notepatra polls http://localhost:11434 and the banner stays red while the probe fails. Causes:
- Daemon not started — systemctl start ollama (Linux) / launch the app (Win/macOS) / ollama serve (manual).
- Wrong port — Ollama only binds 11434. If you set OLLAMA_HOST=0.0.0.0:11500 or similar, Notepatra won't find it. Either unset that env var or restart Ollama on the default port.
- Firewall — uncommon on localhost, but Windows Defender has been known to block first-time loopback connections. Allow ollama.exe when prompted.
- Refresh — click the refresh button in the AI panel status bar to re-probe immediately instead of waiting for the next poll cycle.
Models don't appear in Notepatra dropdown
Ollama is reachable (banner is green) but the dropdown is empty or stale.
- Nothing pulled — run ollama list. If empty, pull at least one model.
- Pulled after Notepatra started — click the refresh button next to the model dropdown. Notepatra re-fetches /api/tags.
- Search filter active — the dropdown is searchable (v0.1.55). Clear the search text or type a fragment of the model name.
Model pulled but capability probe says no tools
v0.1.55 introduced an /api/show capability probe — Coding Mode hides models that can't call tools. If a model you expect to support tools is greyed out:
- Base / instruct distinction — only instruct-tuned models declare tool support. qwen2.5:7b works; qwen2.5-base:7b does not.
- Old quant — re-pull the latest tag: ollama pull qwen2.5-coder:7b overwrites the local copy with the current manifest, which may now include the tools capability flag.
- Workaround — switch to Chat mode, which doesn't require tools, or pick a model from the curated list (Notepatra ships a known-good list per backend).
Out of memory on 7B+ models
Symptom: Ollama logs show llama_model_load: error loading model: failed to allocate buffer, or the daemon segfaults mid-generation.
- RAM budget — 7B Q4 needs ~6 GB free; 13B ~10 GB; 34B ~24 GB. Close browsers / VMs first.
- Pick a smaller quant — ollama pull qwen2.5-coder:7b-instruct-q4_0 uses ~1 GB less than the default Q4_K_M.
- Drop to a 3B — llama3.2:3b, qwen2.5-coder:3b, or gemma2:2b all run on 8 GB laptops. Notepatra auto-prefers these on memory-constrained machines.
- GPU offload — set OLLAMA_NUM_GPU=999 to push all layers to VRAM if you have a discrete GPU.
llama.cpp (power users)
Maximum control. You pick the GGUF, you pick the quant, you pick the GPU layers. No daemon, no model registry — just llama-server with your file. Default port 8080, OpenAI-compatible at /v1.
🐧 Linux — build from source
- Clone and build:
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON   # or -DGGML_VULKAN=ON, or omit for CPU-only
cmake --build build --config Release -j
Binaries land in build/bin/. Add that to PATH or copy llama-server somewhere in your $PATH.
- Verify: llama-server --version
🪟 Windows — Releases zip
- Download the latest pre-built zip from github.com/ggml-org/llama.cpp/releases. Pick llama-bXXXX-bin-win-cuda-x64.zip if you have an NVIDIA GPU, ...-vulkan-... for AMD/Intel, or ...-avx2-... for CPU-only.
- Extract to a folder you'll remember (e.g. C:\llama.cpp) and add it to your PATH environment variable, or always invoke with the full path.
- Verify in PowerShell: llama-server.exe --version
🍎 macOS — Homebrew
- Install: brew install llama.cpp. Apple Silicon Macs get Metal acceleration by default — no extra flags needed.
- Verify: llama-server --version
Picking a GGUF
GGUF is a quantised single-file model format. Smaller files run faster but lose accuracy. The sweet spot for 7B-13B models is Q4_K_M; pick higher only if you have spare VRAM.
| Quant | Size (7B) | Quality | When to pick |
|---|---|---|---|
| Q4_K_M | ~4.4 GB | Good | Default. Fits on 8 GB RAM / 6 GB VRAM. Recommended. |
| Q5_K_M | ~5.1 GB | Better | Noticeably sharper on coding. Needs ~10 GB free. |
| Q8_0 | ~7.7 GB | Near-lossless | If you have 16 GB+ and want max fidelity short of full precision. |
| F16 / BF16 | ~14 GB | Lossless | Research / fine-tuning. Overkill for chat. |
For ready-to-use quants, bartowski on Hugging Face publishes consistent re-quants of every popular release with all sizes side by side.
Running llama-server
Minimum command — just point at the GGUF:
llama-server -m /path/to/Qwen2.5-Coder-7B-Instruct-Q4_K_M.gguf --port 8080
Useful flags (a combined invocation is sketched below):
- -c 8192 — context window in tokens. Bigger = more RAM. Default 4096.
- -ngl 999 — offload all layers to GPU (requires CUDA / Vulkan / Metal build).
- --host 0.0.0.0 — bind to all interfaces. Default 127.0.0.1 (localhost only) is what Notepatra expects.
- --api-key SECRET — require an Authorization header. Leave off for local-only use.
- --jinja — use the model's chat template. Required for tool calls in Coding Mode.
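Putting several of those together, a typical launch for Notepatra's Coding Mode might look like the sketch below — the GGUF path is a placeholder, and the default host and port are kept so Notepatra's probe finds the server.

# Example llama-server launch for Coding Mode (model path is a placeholder)
# -c 8192  : larger context window for multi-file prompts
# -ngl 999 : offload all layers to the GPU (GPU builds only)
# --jinja  : apply the model's chat template — required for tool calls
llama-server -m ~/models/Qwen2.5-Coder-7B-Instruct-Q4_K_M.gguf --port 8080 -c 8192 -ngl 999 --jinja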
In Notepatra, pick "llama.cpp" in the backend dropdown. The connectivity dot turns green and the model name (read from the running server) appears in the dropdown.
Troubleshooting
"llama-server not running" — but it IS running
Connectivity probe to http://localhost:8080/v1/models failed. Causes:
- Wrong port — Notepatra expects 8080. If you started --port 8000, restart with --port 8080.
- Bound to a non-localhost interface — if you used --host 192.168.x.x, Notepatra won't probe that. Use --host 127.0.0.1 (default) or 0.0.0.0.
- Server still loading the model — large GGUFs take 30-60 s to mmap on first start. Watch the terminal until you see HTTP server listening, then click refresh in Notepatra.
- API key set on the server but not in Notepatra — if you started with --api-key, paste the same key in Settings → AI → OpenAI-compat. Most users skip --api-key entirely.
Curated catalog showing instead of my loaded model
The dropdown lists Notepatra's curated catalog rather than the model you loaded.
- Probe didn't reach /v1/models — see "not running" above. The catalog is a fallback when the live probe fails.
- Server returns an empty model list — older llama.cpp builds didn't return a name. Update llama.cpp; recent builds always populate id.
- You can still type any name — the dropdown is editable. Type the exact model id llama-server reports in its startup banner.
Cloud backends (OpenRouter, OpenAI, Ollama Cloud, Azure OpenAI)
Cloud backends need a key. Notepatra has per-provider key slots — keys never cross-pollinate between providers. Open Settings → Preferences → AI to see the four sections (OpenRouter, OpenAI, Ollama Cloud, Azure OpenAI), each with its own Test / Save / Forget buttons.
OpenRouter (recommended cloud — one key, many models)
- Sign up at openrouter.ai. Add a few dollars credit; pay-as-you-go per token, no subscription.
- Create a key — Account → Keys → Create. Copy the sk-or-v1-... string.
- Paste in Notepatra — Settings → AI → OpenRouter section → API Key field → click Test. Green = good. Click Save.
- Pick "OpenRouter" in the backend dropdown. The model dropdown lists hundreds of models — type claude, gpt, gemini, grok, kimi, qwen, etc. to filter. A quick terminal check of the key is sketched below.
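To sanity-check the key from a terminal before pasting it in, a minimal chat request works (this is an illustration, not what Notepatra's Test button sends verbatim); $OPENROUTER_API_KEY and the model id are placeholders, and the call spends a fraction of a cent of credit.

# Minimal manual check of an OpenRouter key (placeholders: env var + model id)
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"openai/gpt-4o-mini","messages":[{"role":"user","content":"ping"}]}'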
OpenAI direct
- Get a key at platform.openai.com/api-keys. The string starts with sk- (or sk-proj- for project keys).
- Paste in Settings → AI → OpenAI section → Test → Save.
- Pick "OpenAI" in the backend dropdown. Curated list of GPT-4 family + o-series; type o1 or gpt-4 to filter.
Azure OpenAI deployment
Azure exposes OpenAI through your tenant — you pay your Azure bill, not OpenAI directly. Setup is fiddlier because you address a deployment, not a model.
- Get these values from the Azure portal:
  - Resource name — e.g. my-openai-east (the part before .openai.azure.com).
  - Deployment name — what you called the deployment when you created it. NOT the same as the model name. Example: gpt-4o-prod.
  - API version — e.g. 2024-08-01-preview. Use the latest stable from the Azure docs.
  - Key — Azure portal → your resource → Keys and Endpoint.
- Settings → AI → Azure OpenAI — fill all four fields. Notepatra builds the URL https://{resource}.openai.azure.com/openai/deployments/{deployment}/chat/completions?api-version={version} for you.
- Test — green = the deployment answered. Save, then pick "Azure OpenAI" in the backend dropdown. A manual curl check of the deployment is sketched below.
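To confirm the deployment answers outside Notepatra, a minimal request against the URL it builds looks like the sketch below — resource, deployment, API version, and key are placeholders from your portal, and the api-key header (rather than a Bearer token) is how Azure authenticates.

# Manual probe of an Azure OpenAI deployment (all values are placeholders)
curl -s "https://my-openai-east.openai.azure.com/openai/deployments/gpt-4o-prod/chat/completions?api-version=2024-08-01-preview" \
  -H "api-key: $AZURE_OPENAI_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"ping"}]}'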
Ollama Cloud
Same Ollama models, served from Ollama's hosted infra. Useful when you want Ollama's catalog without the local RAM cost.
- Sign up at ollama.com and create an API key in your account settings.
- Settings → AI → Ollama Cloud — paste the key → Test → Save.
- Pick "Ollama Cloud" in the backend dropdown. The model list is the cloud catalog.
Per-provider key management
Every key slot has three buttons:
- Test — fires a probe at the provider's /models endpoint with the key. Doesn't save anything; only validates.
- Save — writes the key to the platform's secure store (libsecret on Linux, Credential Manager on Windows, Keychain on macOS). Never written to disk in plaintext.
- Forget — wipes the stored key. Use this on shared machines before logging out.
Notepatra never uses a key from one provider against another. Selecting OpenRouter as the backend with no OpenRouter key set will show "OpenRouter key not set", even if your OpenAI key is configured.
Reasoning / thinking models
The Think checkbox in the AI dock toggles reasoning mode. Notepatra normalises the protocol per provider:
- OpenRouter — sends a unified reasoning field; works across Claude, o-series, Gemini, DeepSeek-R1, Qwen-QwQ.
- Anthropic / Claude (via OpenRouter or direct) — extended thinking with thinking: { type: "enabled", budget_tokens: ... }.
- OpenAI o-series — reasoning_effort field. Note: o1/o3 don't accept temperature overrides.
- Local (Ollama, llama.cpp) — models that emit <think>...</think> blocks (Qwen-QwQ, DeepSeek-R1) are auto-detected; Notepatra renders the thinking in a collapsed disclosure.
Troubleshooting
"Key invalid" — but it works in curl
Test button reports failure even though the same key works from a terminal.
- Trailing whitespace — copy-paste from a web page often appends a newline. Re-paste, then click in the field and press End to confirm no trailing characters.
- Wrong slot — OpenRouter keys (sk-or-...) belong in the OpenRouter slot only. Pasting them in the OpenAI slot will fail because Notepatra won't cross-route.
- Org / project header missing — some OpenAI project keys require an OpenAI-Project header. Use a user key (starts sk- not sk-proj-) if you don't want to manage that.
- Rate-limited the test endpoint — the Test button hits /models. If you've already burned through the free tier, the probe fails. Top up credit and retry.
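The terminal check referred to in the heading can be as small as listing models with the key — a 200 plus a JSON model list means the key itself is fine and the problem is one of the slot/header issues above; $OPENAI_API_KEY is a placeholder.

# Manual key check against OpenAI's models endpoint
# 200 + JSON model list = key is valid; 401 = key problem, not a Notepatra problem
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"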
Rate-limit errors / 429
Provider replies with HTTP 429.
- Free tier exhausted — most cloud providers cap requests-per-minute on free credit. Add credit or wait the reset window.
- Concurrent requests — Notepatra serialises requests per panel, but if you have multiple AI panels open and all hit the same provider, you can pile on. Close idle panels.
- Exponential backoff — Notepatra retries 429s with exponential backoff up to 3 attempts before surfacing the error in the panel.
Model not in dropdown after refresh
You expect a specific model and it's not in the list, even after clicking refresh.
- Curated list filter — Notepatra ships a curated list of known-good models per backend (v0.1.53+) for a clean UX. The dropdown is editable — type the full model id and press Enter to use any model the provider supports.
- Provider doesn't expose it on /models — some preview models are key-gated and only appear on accounts with a flag enabled. Type the id manually.
- Region / deployment scoping — for Azure, the deployment name is what you address; the underlying model is invisible. Make sure you have a deployment for the model family you want.
Cross-cutting troubleshooting
Backend-agnostic problems and their fixes.
AI panel says "thinking…" forever
- Streaming stalled — for local backends, a busy GPU or a context overflow can hang the stream. Click the Stop button; if that does nothing, restart the local server.
- Network timeout for cloud — Notepatra times out at 120 s of stream silence. If the provider is degraded, you'll see a timeout error after the wait.
- Reasoning model with huge budget — o1, Claude extended thinking, and DeepSeek-R1 can think for minutes on hard prompts. The "thinking" indicator is real progress; check the disclosure if visible.
Stop button doesn't stop
- Local backend mid-prompt-eval — Ollama and llama.cpp can ignore HTTP cancellation during the prompt-evaluation phase (before the first token is emitted). Stop is honoured once generation starts.
- Workaround — for stuck local servers, kill the process: pkill -f ollama / pkill -f llama-server, then restart.
- Cloud — the Stop button closes the SSE stream immediately; the backend may still be generating but you stop being charged for streamed tokens.
Coding mode sends but no tool calls fire
- Model lacks tools capability — the v0.1.55 capability probe should have hidden it. If you bypassed the probe by typing the id manually, the model may not emit tool calls regardless of prompt.
- llama.cpp without --jinja — restart with llama-server --jinja ... so the chat template is applied. Without it, the model never sees the tool schema.
- "Share file with AI" toggle off — Coding Mode without the file context is still functional, but the model can only act on what you paste. Toggle the red lock by the AI dock title to share the current file.
Capabilities probe never returns
- Model metadata not yet loaded — the first /api/show on a model triggers a full metadata read; takes a few seconds. Wait, then refresh.
- Backend timeout — Notepatra times the probe at 5 s. If your local server is overloaded, the probe fails open (assumes no special caps) — switch to Chat mode to keep working.
Where Notepatra stores AI config
- Backend selection, model id, base URL, options — ~/.config/notepatra/preferences.json on Linux, %APPDATA%\notepatra\preferences.json on Windows, ~/Library/Application Support/notepatra/preferences.json on macOS.
- API keys — never in JSON. Linux: libsecret (e.g. GNOME Keyring). Windows: Credential Manager. macOS: Keychain.
- Chat history — ~/.config/notepatra/ai_history/ (or platform equivalent), one JSON file per session.
- Reset — close Notepatra, delete preferences.json, then click Forget in each provider section to clear keychain entries. Restart for a clean slate (Linux commands sketched below).
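On Linux the reset boils down to a couple of commands — the paths are the ones listed above; keys still have to be forgotten from the in-app provider sections because they live in the keychain, not on disk.

# Reset Notepatra's AI configuration on Linux (close Notepatra first)
rm -f  ~/.config/notepatra/preferences.json   # backend / model / base-URL choices
rm -rf ~/.config/notepatra/ai_history         # chat history, one JSON file per session
# API keys are NOT files — clear them with the Forget button in each provider section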
OpenAI-compat (LM Studio, Jan, vLLM, KoboldCpp, llamafile, OpenRouter)
Any local server speaking the OpenAI /v1/chat/completions API works. Paste its URL in Settings → Preferences → AI → OpenAI-compat → Base URL. If the server requires an API key (e.g. OpenRouter, OpenAI itself), paste it in the API Key field — it's sent as an Authorization: Bearer header. Ollama and llama-server ignore it. A curl smoke test is sketched after the table.
| Server | Typical URL | Notes |
|---|---|---|
| LM Studio | http://localhost:1234 | GUI for GGUF — download models with one click. |
| Jan | http://localhost:1337 | Cross-platform, open source, great UX. |
| vLLM | http://localhost:8000 | Fastest for batch inference on GPUs. |
| KoboldCpp | http://localhost:5001 | GGUF loader with roleplay features. |
| llamafile | http://localhost:8080 | Single-file self-contained GGUF executable. |
| text-generation-webui | http://localhost:5000 | Power-user web UI for local models. |
| OpenRouter | https://openrouter.ai/api | Cloud proxy — requires API key. |
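Any server in the table can be smoke-tested with the same request shape Notepatra sends. The sketch below targets LM Studio's default port from the table; the model id is a placeholder — most local servers accept whatever id matches the model they have loaded.

# Smoke-test an OpenAI-compatible local server (LM Studio default port shown)
curl -s http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"local-model","messages":[{"role":"user","content":"Say hello in one word."}]}'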
AI Fix pipeline
The AI Fix button in JSON Tools / HTML Tools / Bracket Tools sends the broken content to Ollama with a strict prompt. Here's exactly what happens:
Request
POST http://localhost:11434/api/generate
Content-Type: application/json
{
"model": "qwen3.5:9b",
"prompt": "Fix ONLY the broken parts of this JSON. Make MINIMAL
changes. PRESERVE the original line order, key order, and
formatting. Do NOT reorder keys. Do NOT reformat. Return
ONLY the corrected JSON.\n\nBROKEN JSON:\n{...}",
"system": "You are a minimal-change JSON patcher. ... /no_think",
"stream": true,
"think": false,
"options": { "temperature": 0.1, "num_predict": 4096 }
}
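The same request can be replayed from a terminal to see the raw model output before the cleanup pipeline touches it — the body below is a trimmed version of the one above, and the model tag is a placeholder (use one that ollama list shows).

# Replay a trimmed AI Fix request against the local Ollama daemon
curl -s http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen2.5-coder:7b",
        "prompt": "Fix ONLY the broken parts of this JSON. Return ONLY the corrected JSON.\n\nBROKEN JSON:\n{\"a\": 1,}",
        "stream": false,
        "options": { "temperature": 0.1 }
      }'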
Response cleanup pipeline
Models don't always follow instructions. The response goes through a defensive cleanup pipeline before being displayed:
- Strip <think>...</think> blocks — defensive regex strip for models that emit reasoning despite think: false.
- Strip markdown ``` fences — if the response starts with ```, find the first newline and the last fence and keep the middle.
- Trim leading prose — find the first { or [ in the response and discard everything before. Handles "Here is the fixed JSON: {...}".
- Format with Rust's serde_json — if parseable, pretty-print with 4-space indent. Otherwise show raw cleaned text.
Why the strict prompt matters
Without explicit instructions, models love to reformat and reorder keys alphabetically. That's useless for a diff — every line looks different even when only one comma was missing. The strict prompt tells the model to preserve the original line order, key order, and indentation, and patch only the broken parts. Show Diff then shows just the actual fixes.
AI Assistant dock
Open: Ctrl+Shift+A, View → AI Assistant, or from the status bar. The panel is a persistent right-side dock (not an editor tab) — one conversation, preserved across tab switches. Layout:
- Backend dropdown — pick Ollama / llama.cpp (GGUF) / OpenRouter / LM Studio / Jan / OpenAI / Custom (any OpenAI-compatible URL). API key edits inline.
- Model dropdown — populated live from the backend's tags / models endpoint with a ↻ refresh button.
- Coding Mode toggle — flips the panel into 3-pane layout (file tree · editor · AI chat) and turns on agentic tools (see below).
- Show thinking checkbox — toggle reasoning for Qwen3 / DeepSeek-R1 / GPT-OSS / Hermes-3.
- Reset button — wipes chat history (in-memory + the on-disk per-workspace history file).
- Red close button (× in #E81123) — closes the dock; chat session is preserved.
- Chat area — user bubbles right-aligned; assistant bubbles left-aligned with the model name in the header. Per-code-block ⧉ Copy code buttons inside every response. Streaming stats render live (tok / tok-s / s).
- Quick actions (revealed by ▸ chevron when Coding Mode is OFF) — three rows of buttons:
- Row 1: Explain · Find Bugs · Refactor · Write Tests
- Row 2: Add Comments · Generate Docs · Optimize · Translate (Py ↔ JS)
- Row 3 (v0.1.40): Fix JSON · Fix HTML · Fix SQL — strict minimal-change patcher for the selection / file. Won't add fields, won't reformat, won't restructure.
- Insert at Cursor / Replace Selection / Copy — apply the last assistant response to the editor.
- Input bar — 📎 attach (image / PDF / DOCX / PPTX / XLSX / text), 🎙 voice (uses arecord + a local whisper CLI when installed), text field, Send, Stop.
Streaming tokens flow into the active assistant bubble in real time. Errors render as red error bubbles.
Workspace awareness
Every prompt to the AI dock automatically carries the right context — Notepatra figures out what the model needs:
- The active selection (or the full current file if no selection)
- Excerpts of every other open editor tab
- The workspace root path
- A flat listing of every file under the workspace (with .git / node_modules / target / dist / __pycache__ filtered out)
So the model can reference files you haven't opened yet — "import from utils.py" works even when utils.py isn't in a tab. The block is gated by intent so casual chat ("hi", "thanks") doesn't get spammed with workspace dump.
Persistent chat history (v0.1.39+)
Conversations survive app restart. Stored at:
~/.config/notepatra/chat-history/<sha1-of-workspace>.json
One file per workspace; switching workspaces loads the right history. Saves are debounced by 2 s and capped at 1 MB per workspace (oldest messages roll off). Reset deletes the on-disk file. Atomic write via .tmp + rename so a kill-9 mid-write never leaves a corrupt history.
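To locate the history file for a given workspace by hand, hash the workspace path — with the assumption (not stated above) that the SHA-1 is taken over the workspace root path string.

# Find the chat-history file for a workspace (assumes the SHA-1 is of the root path string)
ws="$HOME/projects/myapp"                      # placeholder workspace root
echo -n "$ws" | sha1sum | awk '{print $1}'     # hex digest = filename stem (macOS: use shasum)
ls ~/.config/notepatra/chat-history/           # compare against the files here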
Fix-intent detection (v0.1.40+)
Type fix my json / repair this html / the sql is broken in the chat input and the system prompt automatically swaps to a strict minimal-change patcher (same rules as Tools → JSON Tools → AI Fix). Models stop "improving" the input by adding fields, reordering keys, or restructuring.
Does NOT trigger on:
- explain my json / describe this json / what is json
- show me my json files / list all json files / find json
- teach me json
- fix my code (too generic — chat handles it normally)
Implementation lives in src/ai_intent.{h,cpp}. 49 assertions in test_ai_intent cover positive intents (case-insensitive, mixed phrasing), negatives (explain / describe / teach / show / list / find / grep), and edge cases (multi-line, @file mention, generic "fix my code").
Coding Mode — agentic tool-using AI (v0.1.35+)
Tick Coding Mode in the AI dock and Notepatra becomes an agent: the model can read your files, list directories, search, write files, and apply line-level edits — all on its own. The dock flips into a 3-pane layout (file tree · editor · AI chat).
Bottom segmented toggle — Chat / Compose / Agent (v0.1.61+)
The old top-of-panel Chat | Composer tabs are gone. v0.1.61 introduced a 3-segment toggle at the bottom of the AI dock — matching the iOS / Slack keyboard-accessory mental model that Continue.dev, Copilot Chat, and Cursor 3.0 all converged on:
- Chat — free-form conversation. The agent reads + searches but doesn't write.
- Compose — agent's write_file / apply_diff results land in the inline Edit Plan with per-file dry-run diffs. Nothing hits disk until you click Apply Selected or Apply All.
- Agent — write_file / apply_diff execute immediately (still subject to the path-safety / per-turn budget guards). The editor reloads the buffer automatically when a file the agent touched is currently open.
Per-hunk Stage / Revert from the editor gutter (v0.1.62+)
Click any green / red / blue change marker in the editor's git gutter (margin 3) and a hunk popup anchors at that line showing the before-vs-after content (an embedded DiffView) with three buttons:
- Stage hunk — pipes a unified-diff slice through `git apply --whitespace=nowarn --cached`. Same safety rails as the AI tools (workspace anchor + credential deny-list + 5000-line hunk cap).
- Revert hunk — reverses the patch direction so the working tree matches HEAD for just that hunk.
- Copy diff — drops the synthesized diff to your clipboard.
The Compare widget (Tools → Compare Files) gets the same buttons in a docked strip — one row per contiguous diff region (Hunk N/M · rows a–b · [Stage] [Revert] [Jump →]).
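Under the hood this is a plain git apply call fed over stdin. A minimal sketch, assuming a synthesized unified diff is already in hand (the helper name and error reporting are illustrative):

```cpp
// Sketch: stage one synthesized hunk by piping it into git apply.
#include <QByteArray>
#include <QProcess>
#include <QString>

bool stageHunk(const QString &workspaceRoot, const QByteArray &unifiedDiff) {
    QProcess git;
    git.setWorkingDirectory(workspaceRoot);
    git.start("git", {"apply", "--cached", "--whitespace=nowarn", "-"});  // "-" = read diff from stdin
    if (!git.waitForStarted())
        return false;
    git.write(unifiedDiff);          // the before/after slice shown in the hunk popup
    git.closeWriteChannel();
    git.waitForFinished();
    return git.exitStatus() == QProcess::NormalExit && git.exitCode() == 0;
}
```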
Marker-based merge resolution (v0.1.62+)
Files with UU (both-modified) status in the Git panel now show a Resolve button instead of the +/− shortcuts. Clicking it opens the merge helper widget which scans column-0 <<<<<<< / ======= / >>>>>>> markers and surfaces Take ours / Take theirs / Take both / Jump → buttons per conflict region, plus QScintilla annotation labels above each conflict. Full 3-way LOCAL/BASE/REMOTE merge editor is v0.2 scope; the marker-based path covers ~90% of conflicts.
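The marker scan itself is simple. A sketch of how the conflict regions could be located (struct and function are hypothetical; the real widget also drives the Take ours / Take theirs / Take both actions and the annotation labels):

```cpp
// Sketch: locate conflict regions by scanning column-0 markers.
#include <QString>
#include <QStringList>
#include <QVector>

struct ConflictRegion { int oursStart = -1, sep = -1, theirsEnd = -1; };   // buffer line indexes

QVector<ConflictRegion> findConflicts(const QStringList &lines) {
    QVector<ConflictRegion> out;
    ConflictRegion cur;
    for (int i = 0; i < lines.size(); ++i) {
        if (lines[i].startsWith("<<<<<<<"))                            cur.oursStart = i;
        else if (lines[i].startsWith("=======") && cur.oursStart >= 0) cur.sep = i;
        else if (lines[i].startsWith(">>>>>>>") && cur.sep >= 0) {
            cur.theirsEnd = i;
            out.append(cur);      // "Take ours" keeps lines oursStart+1 .. sep-1, and so on
            cur = ConflictRegion{};
        }
    }
    return out;
}
```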
Vision drag-and-drop (v0.1.61+)
Drop an image (PNG / JPG / WEBP / GIF / BMP) or document (PDF / DOCX / PPTX) onto the AI dock. If the active model is not vision-capable, you get a styled error bubble listing alternatives — local qwen2.5vl:7b / gemma3:4b, cloud claude-sonnet-4-6 / gpt-5 / gemini-2.5-flash. Detection:
- Ollama — via the `/api/show` capability cache (returns a `capabilities` array; conservative-false on an empty cache, so a slow Ollama doesn't silently drop your attachment).
- Cloud — via a May-2026 prefix allowlist for Claude 3.5+, GPT-4o/4.1/5, Gemini 1.5+, Sonnet 4.x, etc.
Smart input gating + context guards (v0.1.61+)
The chat input + Send button now disable whenever the model dropdown is in a placeholder state — (detecting…), (Ollama offline), (no models installed), (API key required). State-specific placeholders explain why with concrete next steps (ollama serve, ollama pull qwen2.5-coder:7b, click ⚙). Additional guards:
- Coding Mode without a workspace → points you to File → Open Folder.
- Data Mode without a saved DB connection → points you to Manage Connections.
- Wrong-capability model in Coding mode → refusal card lists tool-capable alternatives.
Agentic tools
The agent gets 5 native tools (v0.1.40 surface). Every tool call shows up as an inline 🔧 toolname (args) → result card in the chat:
| Tool | What it does |
|---|---|
read_file(path, offset?, limit?, with_line_numbers?) | Read a text file from the workspace. Default emits N\t line-number prefix per line so the model can reference exact lines. v0.1.40: pass with_line_numbers=false to get raw content (recommended when feeding lines into apply_diff old_lines). |
list_dir(path) | List one level of entries with type (file/dir) and size. Filters .git / node_modules / target / dist / __pycache__ / .gradle / .idea / .vs. |
search(pattern, path?, regex?, glob?, case_sensitive?, max_matches?) | Find a string or regex across the workspace. Returns up to 50 (default) / 200 (max) matches with file path, line, column, and a snippet. Same heavy-dir skip-list as the file tree. |
write_file(path, content, mode?) | Create or overwrite a text file. Modes: overwrite (default) / create (fails if exists, returns error_kind: exists) / append. Auto-creates parent directories inside the workspace; refuses paths matching the credential deny-list (.ssh, .pem, .key, id_rsa*, /etc/passwd, etc.). 5 MB content cap. After success the file auto-opens in a new tab — or the buffer reloads if it's already open. |
apply_diff(path, hunks) | Atomic line-level edits. Each hunk has old_start_line + old_lines (expected current text) + new_lines (replacement). Two-phase apply: validates ALL hunks against the live file first → if any drifted, returns error_kind: conflict and nothing is written. Otherwise hunks apply in reverse-line-order so earlier indices stay stable. v0.1.40 three-tier match: strict → strip read_file's N\t prefix from old_lines → .trimmed() comparison. Relaxed tiers still apply but emit result.warnings so the agent self-corrects on the next read. Atomic write via .tmp + std::rename. |
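To make the apply_diff two-phase contract concrete, here is a minimal sketch. The Hunk struct and applyHunks are illustrative; the relaxed match tiers and the atomic .tmp write are omitted:

```cpp
// Sketch: validate every hunk first, then apply bottom-up (illustrative only).
#include <QString>
#include <QStringList>
#include <QVector>
#include <algorithm>

struct Hunk { int oldStartLine; QStringList oldLines; QStringList newLines; };

bool applyHunks(QStringList &fileLines, QVector<Hunk> hunks, QString *errorKind) {
    // Phase 1: every hunk must still match the live file, or nothing is written.
    for (const Hunk &h : hunks) {
        for (int i = 0; i < h.oldLines.size(); ++i) {
            const int idx = h.oldStartLine - 1 + i;               // 1-based -> 0-based
            if (idx >= fileLines.size() || fileLines[idx] != h.oldLines[i]) {
                *errorKind = "conflict";                          // file drifted since read_file
                return false;
            }
        }
    }
    // Phase 2: apply in reverse line order so earlier indices stay stable.
    std::sort(hunks.begin(), hunks.end(),
              [](const Hunk &a, const Hunk &b) { return a.oldStartLine > b.oldStartLine; });
    for (const Hunk &h : hunks) {
        const int idx = h.oldStartLine - 1;
        for (int i = 0; i < h.oldLines.size(); ++i) fileLines.removeAt(idx);
        for (int i = 0; i < h.newLines.size(); ++i) fileLines.insert(idx + i, h.newLines[i]);
    }
    return true;
}
```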
Three-layer path security
Every tool call goes through resolveSafePath:
- Workspace anchor + canonicalize. Relative paths resolve against the workspace root; absolute paths are accepted only if they canonicalize back inside it. Catches `../../../../etc/passwd` and symlink-to-secrets attacks.
- Hardcoded deny-list. Refuses `~/.ssh/`, `*.pem`, `*.key`, `id_rsa*`, `/etc/passwd|shadow`, `~/.gnupg/`, `~/.aws/`, `~/.netrc`, `~/.npmrc`, `~/.docker/config.json`, etc. Applied to the candidate path itself, so creating `~/.ssh/foo` via a mkpath chain is still refused.
- Structured errors. Refusals come back as `error_kind: outside_workspace | denied | not_found | exists | binary | too_large | conflict | malformed_args | timed_out` — the model knows what to fix.
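A simplified sketch of that layering, with an abbreviated deny-list and no symlink canonicalization (both of which the real check handles):

```cpp
// Sketch: anchor + normalize, then deny-list, then structured errors.
#include <QDir>
#include <QString>
#include <QStringList>

QString resolveSafePathSketch(const QString &workspaceRoot, const QString &requested,
                              QString *errorKind) {
    // 1. Anchor + normalize: relative paths resolve against the workspace root.
    //    (The real implementation also canonicalizes symlinks before comparing.)
    const QString root = QDir::cleanPath(workspaceRoot);
    const QString candidate = QDir::cleanPath(QDir(root).absoluteFilePath(requested));
    if (candidate != root && !candidate.startsWith(root + "/")) {
        *errorKind = "outside_workspace";          // ../../../../etc/passwd and friends
        return {};
    }
    // 2. Deny-list applied to the candidate path itself (abbreviated here).
    static const QStringList denied = {"/.ssh/", ".pem", ".key", "id_rsa", "/.aws/", "/.gnupg/"};
    for (const QString &d : denied)
        if (candidate.contains(d)) { *errorKind = "denied"; return {}; }
    // 3. Anything else proceeds; other error kinds (not_found, exists, too_large, ...)
    //    come back from the individual tools.
    return candidate;
}
```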
Per-turn budget (Hard cap)
Hard cap of 25 tool calls per user turn to prevent runaway loops on confused models. When exhausted, the agent receives: "Tool-call budget exhausted (25 calls this turn). Stop and summarise what you've found."
Tool-call wire format
Notepatra uses each backend's native tool-call protocol:
- Ollama — `/api/chat` with the `tools` array. Some open-weight models also need `--jinja` (we surface a friendly warning when the response shape suggests this).
- llama.cpp — `llama-server` with `--jinja` required. OpenAI-compatible `tool_calls` in the response.
- OpenAI-compat (OpenRouter, OpenAI, LM Studio, vLLM, Anthropic via OpenRouter, Gemini via OpenRouter, Jan, etc.) — standard SSE streaming with `finish_reason: tool_calls`.
v0.1.40 surfaces malformed tool-call JSON as a structured error_kind: malformed_args result back to the model — so a model that emits unescaped quotes in arguments gets a clear "re-emit with valid JSON" message + raw-args preview, instead of silently passing empty args downstream.
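For reference, one entry of an OpenAI-style tools array built with Qt's JSON classes. The shape below is the generic function-tool schema that Ollama's /api/chat and llama-server accept; the exact descriptions Notepatra sends are not documented here, so treat the values as illustrative:

```cpp
// Sketch: an OpenAI-style function-tool entry (illustrative values, not Notepatra's exact payload).
#include <QJsonArray>
#include <QJsonObject>

QJsonObject readFileToolSpec() {
    QJsonObject pathProp{{"type", "string"},
                         {"description", "Path relative to the workspace root"}};
    QJsonObject params{{"type", "object"},
                       {"properties", QJsonObject{{"path", pathProp}}},
                       {"required", QJsonArray{"path"}}};
    QJsonObject fn{{"name", "read_file"},
                   {"description", "Read a text file from the workspace"},
                   {"parameters", params}};
    return QJsonObject{{"type", "function"}, {"function", fn}};
}
```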
Models that work well in Coding Mode
Notepatra has a model allowlist for Ollama (other backends always send tools and trust the server's support detection). Confirmed working:
- Qwen3 / Qwen3.5 — best small-model tool-call support today.
- Llama 3.1+ (3.1 / 3.2 / 3.3) — first Meta line with native tool calls.
- Hermes-3, Mistral-Nemo, Granite 3+, GPT-OSS, Command R / R+.
- Cloud via OpenAI-compat: GPT-4o / GPT-5, Claude 3.5+ / Sonnet 4+, Gemini 1.5+ via OpenRouter.
Models not recommended: phi-3-mini / phi-3.5 (no tool support), gemma-2-2b (sometimes hallucinates tool calls), tinyllama, llama 3 base.
Data Analyst Mode (v0.1.43+)
Tick the new Data checkbox in the AI dock header (next to Coding) and the AI assistant becomes a data analyst: it can query attached CSVs, run SQL against your saved database connections, and emit live charts inline in the chat. Mutually exclusive with Coding Mode so the panel stays focused.
What changes when Data Mode is on
- Header band turns orange — "AI · 📊 DATA" — so it's unmistakable which mode is active.
- A Manage Connections… button appears under the model picker.
- Attached CSVs get a structured digest instead of a raw text dump (delimiter sniff, header probe, per-column type — Integer / Real / Boolean / Date / DateTime / Text — null counts, ranges, head + tail rows, capped at 4 KB).
- The system prompt swaps to a data-analyst persona that emits Findings → Method → Suggested follow-ups structure.
- Two new agentic tools (below) are attached to the request alongside `read_file` / `list_dir` / `search`.
- If you have a `.notepatra/data-analyst.md` in your workspace, its contents are auto-prepended as a "Project data context" layer (capped at 8 KB).
The two new agentic tools
| Tool | What it does |
|---|---|
csv_query(file_path, sql, max_rows?, max_load_rows?) | Loads a workspace CSV into in-memory SQLite (table name csv, column names from the header) and runs your SQL. Default loads up to 250,000 rows; default returns up to 500. Lets the model ask "SELECT category, SUM(amount) FROM csv WHERE order_date >= '2026-04-01' GROUP BY category ORDER BY 2 DESC" instead of trying to mentally scan a million-line file. |
query_sql(connection_name, sql, max_rows?, confirm?) | Runs SQL against a saved connection. SELECT-only by default (also: WITH / EXPLAIN / PRAGMA / SHOW / DESCRIBE). INSERT / UPDATE / DELETE / DDL require confirm:true after the user explicitly approves — no implicit mutations. Caps results at 500 rows. |
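The csv_query mechanics boil down to loading the CSV into an in-memory SQLite table named csv and handing your SQL to SQLite. A rough sketch with a hypothetical helper and deliberately naive CSV parsing (no quoted-comma handling, no type sniffing):

```cpp
// Sketch: CSV -> in-memory SQLite so SQL can do the aggregation.
#include <QFile>
#include <QSqlDatabase>
#include <QSqlQuery>
#include <QTextStream>

QSqlDatabase loadCsvIntoSqlite(const QString &csvPath) {
    QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE", "csv_query");
    db.setDatabaseName(":memory:");
    db.open();
    QFile f(csvPath);
    f.open(QIODevice::ReadOnly | QIODevice::Text);
    QTextStream in(&f);
    const QStringList header = in.readLine().split(',');               // column names from the header row
    QSqlQuery q(db);
    q.exec("CREATE TABLE csv (\"" + header.join("\",\"") + "\")");     // untyped columns
    while (!in.atEnd()) {
        const QStringList row = in.readLine().split(',');
        QString placeholders = QString("?,").repeated(row.size());
        placeholders.chop(1);
        q.prepare("INSERT INTO csv VALUES (" + placeholders + ")");
        for (const QString &v : row) q.addBindValue(v);
        q.exec();
    }
    return db;   // now e.g. "SELECT category, SUM(amount) FROM csv GROUP BY category" works
}
```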
Live charts inline in the chat
v0.1.64+ note: charts now ship behind the optional charts pack. On a default (lite) install, the AI's generate_chart tool result paints as a "📊 Chart rendering requires the Charts Pack" card with two buttons — [Install charts pack] (opens GitHub Releases with the matching -full asset for your OS) and [View JSON instead] (shows the raw Vega-Lite spec in a fenced code block). See Lite vs Full for the upgrade paths. The flow described below is what you see once you're on the Full flavor.
When a visualization clarifies the answer, the model emits a fenced ```chart block with a small JSON spec. Notepatra parses each one and embeds a real interactive QChartView under the assistant's prose:
```chart
{
"type": "bar",
"title": "Revenue by category, last 30 days",
"x": "category",
"y": "revenue",
"data": [
{"category": "electronics", "revenue": 284921.50},
{"category": "books", "revenue": 52183.25},
{"category": "home", "revenue": 98442.10}
]
}
```
Supported types: line, bar, pie, scatter. Theme-aware. Category-aware axes for string-valued X columns; numeric axes for numeric X. Malformed JSON falls back to displaying the spec as a code block — nothing breaks.
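To illustrate what the renderer does with such a spec on the Full flavor, a minimal Qt Charts sketch for the bar type only (theming, the other chart types, and error handling are omitted; the function is hypothetical):

```cpp
// Sketch: bar-type chart spec -> QChartView (Qt Charts module required).
#include <QtCharts/QBarCategoryAxis>
#include <QtCharts/QBarSeries>
#include <QtCharts/QBarSet>
#include <QtCharts/QChartView>
#include <QtCharts/QValueAxis>
#include <QJsonArray>
#include <QJsonObject>
#include <QStringList>

using namespace QtCharts;

QChartView *buildBarChart(const QJsonObject &spec) {
    const QString xKey = spec["x"].toString(), yKey = spec["y"].toString();
    auto *set = new QBarSet(yKey);
    QStringList categories;
    for (const QJsonValue &row : spec["data"].toArray()) {
        categories << row.toObject()[xKey].toString();   // category-aware X axis
        *set << row.toObject()[yKey].toDouble();
    }
    auto *series = new QBarSeries;
    series->append(set);
    auto *chart = new QChart;
    chart->addSeries(series);
    chart->setTitle(spec["title"].toString());
    auto *axisX = new QBarCategoryAxis;
    axisX->append(categories);
    chart->addAxis(axisX, Qt::AlignBottom);
    series->attachAxis(axisX);
    auto *axisY = new QValueAxis;
    chart->addAxis(axisY, Qt::AlignLeft);
    series->attachAxis(axisY);
    return new QChartView(chart);
}
```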
Database connection manager
Click Manage Connections… (visible only when Data Mode is on) to add a new connection. The dialog has a Preset dropdown at the top — pick one and the form fills with sensible defaults for that database type so you only edit what's specific to your server.
Available presets (v0.1.66+):
- SQL Server (localhost, ODBC) — Driver = `QODBC`, Host = `127.0.0.1`, Port = `1433`, Database = `master`, Username = `sa`, Options = `DRIVER={ODBC Driver 18 for SQL Server};Encrypt=no`
- SQL Server Express (named instance, ODBC) — Host = `localhost\SQLEXPRESS`, Port = default (named instances use the SQL Browser service), Options include `Trusted_Connection=yes` for Windows Auth
- Azure SQL Database (ODBC) — Host = `yourserver.database.windows.net`, Options include `Encrypt=yes;TrustServerCertificate=no` (Azure requires TLS)
- PostgreSQL (localhost) — Driver = `QPSQL`, Port = `5432`, Database = `postgres`, Username = `postgres`
- MySQL / MariaDB (localhost) — Driver = `QMYSQL`, Port = `3306`, Database = `mysql`, Username = `root`
- SQLite (file on disk) — Driver = `QSQLITE`, Browse… picker focused on Database for the .db path
- DuckDB (file or :memory:) — Driver = `DUCKDB`, Database = `:memory:` (also accepts a .duckdb / .csv / .parquet / s3:// path)
Under the form, a hint label paints in green when the Qt SQL plugin for the chosen driver is present (with a usage tip — e.g. "Named instances use the SQL Browser service — leave Port at default") or amber with the exact per-OS install commands when the plugin is missing.
Connections are saved to ~/.config/notepatra/db-connections.json (or platform equivalent). The dialog also has a Test button that opens + closes the connection without committing the record — green tick on success, red driver error on failure.
How to connect — step by step (v0.1.66+)
SQL Server — local Docker for testing
The fastest way to play with SQL Server is the bundled Docker harness:
# 1. Spin up MS SQL Server 2022 + seed a NotepatraTest database
bash scripts/sql-server-local-setup.sh
# 2. Linux only — install Microsoft's ODBC Driver 18:
curl https://packages.microsoft.com/keys/microsoft.asc | sudo gpg --dearmor -o /usr/share/keyrings/microsoft-prod.gpg
curl https://packages.microsoft.com/config/ubuntu/24.04/prod.list | sudo tee /etc/apt/sources.list.d/mssql-release.list
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install -y msodbcsql18 unixodbc-dev libqt5sql5-odbc
# 3. Restart Notepatra. The 'sql-server-local' connection is already
# registered — open Data Mode → Manage Connections to see it.
# Tear down later:
bash scripts/sql-server-local-setup.sh --teardown # stop + remove container
bash scripts/sql-server-local-setup.sh --wipe # also drop the data volume
The script creates a NotepatraTest database with customers, products, and orders tables already populated, plus a sql-server-local entry in your db-connections.json. After installing the ODBC driver, you can ask the Data Analyst:
- "Show top-3 customers by total spend from the sql-server-local connection"
- "List the schema of dbo.orders"
- "What's the average order value per country?"
SQL Server — connecting to an existing server (Windows / macOS / Linux)
- In Manage Connections, pick the SQL Server (localhost, ODBC) preset to pre-fill the form.
- Change Host to your server's address. For named instances (e.g. SQL Express): `SERVER\INSTANCE`, and set Port to default.
- For SQL Authentication: enter your SQL login + password.
- For Windows Authentication (domain account / current Windows user, Windows only): clear Username, leave Password empty, and add `;Trusted_Connection=yes` to Options.
- For TLS-required servers (Azure, modern on-prem): change Options to `DRIVER={ODBC Driver 18 for SQL Server};Encrypt=yes;TrustServerCertificate=no`.
- Click Test. If green, click Save Changes then OK.
If Test fails with "data source name not found and no default driver specified", your DRIVER={…} string doesn't match a driver actually installed. Run odbcinst -q -d (Linux/macOS) or ODBC Data Source Administrator (Windows) to see which driver names are available — use the exact name in your Options string.
PostgreSQL — local + managed (RDS / Cloud SQL / Neon / Supabase)
- Pick the PostgreSQL (localhost) preset.
- For a local Postgres, leave everything at the defaults and enter your password.
- For a remote / managed Postgres, set Host to your endpoint (e.g. `myapp.abc123.us-east-1.rds.amazonaws.com`) and Database to your specific DB name (RDS doesn't accept `postgres` as default).
- For TLS-required servers (any managed PG should require TLS), set Options to `sslmode=require` (or `sslmode=verify-full;sslrootcert=/path/to/ca.pem` for cert pinning).
- Unix-socket connections (lower latency on the same host): leave Host empty, set Options to `host=/var/run/postgresql`.
MySQL / MariaDB
- Pick the MySQL / MariaDB (localhost) preset.
- For local servers, defaults are fine — set your password and the database name.
- For RDS / PlanetScale / managed MySQL, set Options to `SSL_CA=/etc/ssl/certs/ca-certificates.crt;MYSQL_OPT_CONNECT_TIMEOUT=10`.
- For Unix-socket: clear Host, add `UNIX_SOCKET=/var/run/mysqld/mysqld.sock` to Options.
SQLite — file-based
- Pick the SQLite (file on disk) preset — focus jumps to Database.
- Click Browse… and pick the .db / .sqlite / .sqlite3 file (or type a path — the file is created if it doesn't exist).
- No host / port / username / password — SQLite is file-based.
DuckDB — files, S3, in-memory
- Pick the DuckDB (file or :memory:) preset — Database defaults to `:memory:`.
- The Database field is multi-mode — point it at:
  - `:memory:` — ephemeral DB
  - `/path/to.duckdb` — persistent DuckDB file
  - `/path/to.csv`, `/path/to.parquet`, `/path/to.json` — DuckDB reads them directly (no import step)
  - `s3://bucket/key.parquet` — S3 via DuckDB's httpfs extension; fill Options with `region;access_key_id;secret;session_token`
- No host / port / username / password — DuckDB runs in-process via the native libduckdb engine.
Driver availability per OS
| Driver | Linux (Debian/Ubuntu) | macOS (Homebrew) | Windows |
|---|---|---|---|
| QSQLITE | Built into Qt — always available | Built into Qt — always available | Built into Qt — always available |
| QPSQL (PostgreSQL) | sudo apt-get install libqt5sql5-psql | bundled in brew install qt@5 | Bundled with the Notepatra Windows release |
| QMYSQL (MySQL/MariaDB) | sudo apt-get install libqt5sql5-mysql | bundled in brew install qt@5 | Bundled with the Notepatra Windows release |
| QODBC (SQL Server) | sudo apt-get install libqt5sql5-odbc unixodbc-dev msodbcsql18 (see MS install guide for the package repo setup) | brew tap microsoft/mssql-release && HOMEBREW_ACCEPT_EULA=Y brew install msodbcsql18 unixodbc | Bundled with the Notepatra Windows release (Microsoft ships ODBC drivers with Windows itself) |
| DUCKDB | Bundled in the Notepatra Linux release (vendored libduckdb.so) | Bundled in the Notepatra macOS release | Bundled in the Notepatra Windows release |
Troubleshooting
- "Driver not loaded" — the Qt SQL plugin for your driver isn't installed. The Manage Connections dialog now paints an amber install-command hint under the form when this happens; copy-paste the command for your OS.
- "data source name not found" (QODBC) — your Options
DRIVER={…}string references an ODBC driver that isn't installed. Linux/macOS:odbcinst -q -dlists installed drivers. Windows: ODBC Data Source Administrator (in Administrative Tools). Use the exact name in your Options. - "Server certificate not trusted" — add
TrustServerCertificate=yesto QODBC Options (test environments only), or install the server's CA cert. - "FATAL: password authentication failed" (QPSQL) — usually a
pg_hba.confmismatch. For local-only testing, addhost all all 127.0.0.1/32 md5at the bottom ofpg_hba.confandpg_ctl reload. - "Can't connect to MySQL server on 'localhost'" — MySQL often binds Unix-socket-only by default. Either start it with
--bind-address=127.0.0.1or use socket-mode (clear Host, addUNIX_SOCKET=…to Options).
Browse Schemas… (Database tree dialog, v0.1.55)
Click Browse Schemas… in the Data Analyst panel to open a tree view of every saved connection. Schema introspection is lazy — clicking a connection node loads its schemas; clicking a schema loads its tables; clicking a table loads its columns + types. Supports SQLite (via sqlite_master), DuckDB (via the native listTables / describeTable wrappers), and any database with INFORMATION_SCHEMA (PostgreSQL, MySQL, SQL Server). Live filter input narrows the tree by name. Right-click any node:
- Send schema to AI — pins the table's column list to the next prompt as authoritative schema context.
- Sample 10 rows — runs `SELECT * FROM <table> LIMIT 10` in a results pane.
- Copy SELECT * — drops a fully-qualified `SELECT * FROM schema.table` into your clipboard.
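For the INFORMATION_SCHEMA path, the lazy table introspection amounts to queries along these lines (a sketch; the actual wrappers and the SQLite / DuckDB code paths differ):

```cpp
// Sketch: list tables in one schema via INFORMATION_SCHEMA.
#include <QSqlDatabase>
#include <QSqlQuery>
#include <QStringList>

QStringList listTables(QSqlDatabase &db, const QString &schema) {
    QSqlQuery q(db);
    q.prepare("SELECT table_name FROM information_schema.tables "
              "WHERE table_schema = :schema ORDER BY table_name");
    q.bindValue(":schema", schema);
    q.exec();
    QStringList tables;
    while (q.next())
        tables << q.value(0).toString();
    return tables;
}
```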
Honest limitation: connection passwords are obscured at rest (XOR + base64), NOT real encryption. This is "don't show plaintext to people walking past my screen", NOT "survives a stolen laptop." For production secrets, use OS keychain / .pgpass / instance-role IAM. OS-keychain integration is a future candidate.
Project-level data context: .notepatra/data-analyst/ (v0.1.55)
Drop one or more markdown / SQL files into a .notepatra/data-analyst/ directory in your workspace and Notepatra auto-prepends their contents to the system prompt as a "Project data context" layer when Data Mode is on. Per-workspace, version-controllable. Conventions:
- `instructions.md` — high-level analyst guidance ("treat NULL amounts as 0", "fiscal year matches calendar")
- `data-dictionary.md` — column-by-column meaning for tricky tables
- `business-rules.md` — domain rules, edge cases, segment definitions
- `kpis.md` — definitions of revenue / churn / retention / ARR / LTV in your business
- `sample-queries.sql` — reference SQL the model can adapt
- any other `*.md` / `*.sql` files in the directory get loaded too
Caps: 64 KB total across all files, 16 KB per file. The Welcome card surfaces the loaded-files count so you know the context was picked up. The legacy single-file .notepatra/data-analyst.md is also still supported. For example, an `instructions.md` might contain:
- The `orders` table joins `customers` on `customer_id`.
- Treat NULL in `amount` as 0 (legacy import bug, never backfilled).
- `rate` is a percentage stored 0–1, not 0–100. Don't average it.
- Q1 = Jan-Mar, fiscal year matches calendar.
Then ask "show me last quarter's top-revenue customers" and the model already knows the schema and quirks.
Model capability gating
Multi-table SQL and chart-spec emission are harder than read_file. AiTools::modelCapableOfDataAnalysis() allowlists frontier cloud models (Claude 4.x, GPT-4 / 5, Gemini 2.x, DeepSeek-V3) and local models ≥7B params from strong families (qwen2.5-coder, llama3.x, mistral-large). When you toggle Data Mode on with a model below the bar (e.g. llama3.2:1b), the panel shows an inline orange banner suggesting capable alternatives. The mode still works — the banner is the heads-up, not a hard block.
What's NOT in v0.1.43 (deferred)
- PPT / image data extraction. Attaching a `.pptx` currently falls back to the existing text extractor; chart images aren't OCR'd into structured data. v0.1.44+ candidate.
- Real OS-keychain integration for connection passwords. The XOR obscuration is a placeholder. v0.1.44+ candidate.
- Multi-series charts. The current spec supports a single `x`/`y` per chart; multi-line / stacked-bar in v0.1.44+.
- Chart export to PNG / SVG. The QChartView is interactive; right-click → save image as a workaround.
Thinking models
Qwen3 / DeepSeek-R1 and other "thinking" models emit <think>...</think> reasoning blocks before the actual answer. For JSON Tools, this breaks the parser (the thinking block isn't valid JSON). Notepatra's cleanup pipeline strips <think> tags three ways:
- Passes `think: false` in the `/api/generate` request body (honored by modern Ollama)
- Appends `/no_think` to the system prompt (honored by some models as a slash-command)
- Regex-strips `<think>...</think>` from the final response as a defensive backup
For the AI Assistant chat, you can toggle Show thinking to see the reasoning if you're curious.
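The defensive third layer is a plain regex strip. A minimal sketch (helper name hypothetical):

```cpp
// Sketch: strip <think>...</think> blocks as a last-resort cleanup.
#include <QRegularExpression>
#include <QString>

QString stripThinking(QString response) {
    static const QRegularExpression thinkBlock(
        "<think>.*?</think>",
        QRegularExpression::DotMatchesEverythingOption);   // let . cross newlines
    return response.remove(thinkBlock).trimmed();
}
```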
Vision models & file attachments
The 📎 attach button accepts any file type. What happens depends on the kind:
| File kind | Handled how |
|---|---|
| 🖼 Images (png/jpg/webp/gif/bmp) | Loaded as QImage, downscaled to max 1280px, re-encoded as PNG, base64-encoded, passed in the images field of /api/generate. Vision models (llava, llama3.2-vision, qwen2-vl, moondream, granite-vision) actually see the image. Non-vision models silently ignore the field. |
| 📕 PDF | pdftotext -layout via QProcess (requires poppler-utils), extracted text appended to the prompt as context, capped at 100 KB. |
| 📘 DOCX / ODT | unzip -p file.docx word/document.xml, XML tags stripped, appended to prompt. |
| 📙 PPTX | unzip -p file.pptx ppt/slides/slide1.xml, XML stripped, appended (first slide only for now). |
| 📗 XLSX | unzip -p file.xlsx xl/sharedStrings.xml, appended. |
| 📄 Text / code (*.txt / *.md / *.json / *.py / *.cpp / *.js / ...) | Read raw as UTF-8, capped at 100 KB, appended as context. |
| Anything else | Attempted as text (UTF-8). |
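A sketch of the image row from the table above: downscale to 1280 px, re-encode as PNG, base64 for the images field (helper name hypothetical):

```cpp
// Sketch: prepare an attached image for a vision-capable model.
#include <QBuffer>
#include <QImage>

QByteArray imageToBase64Png(const QString &path) {
    QImage img(path);
    if (img.width() > 1280 || img.height() > 1280)
        img = img.scaled(1280, 1280, Qt::KeepAspectRatio, Qt::SmoothTransformation);
    QByteArray bytes;
    QBuffer buf(&bytes);
    buf.open(QIODevice::WriteOnly);
    img.save(&buf, "PNG");               // re-encode regardless of the source format
    return bytes.toBase64();             // goes into the "images" array of /api/generate
}
```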
Keyboard shortcuts
File
| Ctrl+N | New |
| Ctrl+O | Open |
| Ctrl+S | Save |
| Ctrl+Shift+S | Save All |
| Ctrl+W | Close tab |
| Ctrl+Shift+T | Reopen last closed tab |
| Ctrl+P |
Edit
| Ctrl+Z | Undo |
| Ctrl+Y / Ctrl+Shift+Z | Redo |
| Ctrl+X / Ctrl+C / Ctrl+V | Cut / Copy / Paste |
| Ctrl+A | Select all |
| Ctrl+D | Duplicate line |
| Ctrl+Shift+K | Delete line |
| Ctrl+/ | Toggle line comment |
| Ctrl+Shift+U | UPPERCASE selection |
| Ctrl+U | lowercase selection |
| Alt+↑ / Alt+↓ | Move line up / down |
Search
| Ctrl+F | Find |
| Ctrl+H | Replace |
| F3 / Shift+F3 | Find next / previous |
| Ctrl+G | Go to line |
| Ctrl+B | Go to matching brace (swivels between open/close) |
| Ctrl+F2 | Toggle bookmark |
| F2 | Next bookmark |
| Shift+F2 | Previous bookmark |
View / Navigation
| Ctrl+= / Ctrl+- | Zoom in / out |
| Ctrl+0 | Reset zoom |
| Ctrl+Tab | Next tab |
| Ctrl+Shift+Tab | Previous tab |
| Ctrl+Shift+E | Toggle File Explorer sidebar |
| Ctrl+Shift+A | Toggle AI Assistant dock |
| Ctrl+Shift+G | Open Project Search |
| F11 | Full screen |
Macro
| Ctrl+Shift+R | Start / stop macro recording |
| Ctrl+Shift+P | Play last macro |
Command-line flags
notepatra [options] [file1] [file2] ...
Options:
-h, --help Show help and exit
-v, --version Show version and exit
-n, --new Open a new window, don't restore session
--line N Go to line N in the first file
--theme NAME Use theme: Light, Dark, Monokai
Examples:
notepatra Open with last session restored
notepatra file.py Open a file
notepatra --line 42 file.py Open at line 42
notepatra --theme Dark Start in dark mode
notepatra *.json Open multiple files
Config file layout
All persistent state lives under:
| OS | Path |
|---|---|
| Linux / macOS | ~/.config/notepatra/ |
| Windows | %LOCALAPPDATA%\Notepatra\ |
Files inside that directory
- `config.json` — editor preferences: theme, font, tab width, word wrap, line numbers, etc.
- `session.json` — open tabs, cursor positions, window geometry
- `recovery/` — auto-save backups of unsaved buffers (every 10 s)
- `macros/` — saved macro files (JSON)
- `plugins/` — user-installed external plugins
- `context.json` — runtime LLM context (editable via the Context page)
Build from source
Dependencies
- Qt5 (5.15.2 or newer) — `qtbase5-dev`, `libqt5widgets5`, `libqt5core5a`, `libqt5network5`, `libqt5printsupport5`
- QScintilla 2.14.1+ — `libqscintilla2-qt5-dev`
- Rust (1.70+) — via `rustup`
- CMake 3.16+
- C++17 compiler — gcc 9+, clang 10+, or MSVC 2019+
One-liner (Linux/macOS)
git clone https://github.com/singhpratech/notepatra.git
cd notepatra
./build.sh
Add --tests to also build and run the regression suite via CTest.
Manual CMake
cd notepatra
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=ON
cmake --build . -j$(nproc)
./notepatra --version
Windows (PowerShell, MSVC)
# 1. Install Qt5 via aqtinstall or install-qt-action
# 2. Build QScintilla from Riverbank source via qmake + nmake
# 3. Then:
cd notepatra
mkdir build; cd build
cmake .. -G "Visual Studio 17 2022" -A x64
cmake --build . --config Release
See .github/workflows/build.yml for the exact CI command sequence.
Verifying releases
Every release ships with:
- SHA-256 checksums in `SHA256SUMS`
- Cosign keyless signatures (Sigstore + Rekor transparency log) — `.sig` + `.pem` per asset
- SLSA build provenance attestations cryptographically linking each binary to the commit + workflow + runner
SHA-256
curl -sL -O https://github.com/singhpratech/notepatra/releases/latest/download/SHA256SUMS
sha256sum -c SHA256SUMS --ignore-missing
Cosign verify
cosign verify-blob \
--certificate-identity-regexp '^https://github.com/singhpratech/notepatra/' \
--certificate-oidc-issuer 'https://token.actions.githubusercontent.com' \
--certificate notepatra-linux-x64.tar.gz.pem \
--signature notepatra-linux-x64.tar.gz.sig \
notepatra-linux-x64.tar.gz
SLSA attestation
gh attestation verify notepatra-linux-x64.tar.gz --owner singhpratech
Full threat model and disclosure policy in SECURITY.md.
Troubleshooting
Notepatra opens but no text renders / white-on-white
Likely a lexer palette bug. Try Settings → Theme → Light then back to your theme. If it persists, file a bug with your OS + theme + a screenshot.
AI Fix doesn't do anything
- Check the Ollama Status bar at the top of the JSON Tools panel — is the dot green?
- If red: run `ollama serve` in a terminal and check port 11434.
- If green but no response: make sure you've pulled at least one model — `ollama list`.
- Check Notepatra's stderr for the streaming tokens / error.
Windows: "This program can't start because ... was not found"
The NSIS installer bundles Qt DLLs, but the portable .zip depends on the Visual C++ Redistributable. Install the latest VC++ Redistributable and relaunch.
macOS: "Notepatra can't be opened because it is from an unidentified developer"
macOS Gatekeeper. Right-click the .app, choose Open, then Open again in the warning dialog. Or run xattr -cr Notepatra.app to clear the quarantine attribute.
Linux ARM64 binary doesn't launch
Make sure the file is executable: chmod +x notepatra. Check Qt5 is installed for your architecture.
FAQ
Is Notepatra a port of another editor?
No. Notepatra is its own editor — written from scratch in C++17 with a Rust core for hot paths (mmap I/O, Aho-Corasick search, Myers diff, formatters). It runs natively on Linux x64 / ARM64, macOS Apple Silicon, and Windows x64 from a single codebase.
How big is the binary?
The bare executable is ~9 MB stripped on each platform. Installed footprint ranges from ~5 MB on Linux (Qt comes from your distro) to ~85 MB on Windows (Qt5 DLLs + QScintilla DLL bundled in the MSI). Truly tiny — and zero browser runtime.
Does my code go to the cloud?
By default, no — Notepatra is local-first. The outbound connections are: (1) your selected local AI backend (Ollama on localhost:11434, llama-server, LM Studio, etc.) when you use AI features, (2) git push/pull when you click the Git panel buttons, (3) the REST client when you send a request, (4) a single GitHub-API call on launch when the auto-updater checks for new releases. Cloud AI backends (OpenAI / OpenRouter / Anthropic via OpenRouter / Gemini via OpenRouter) are opt-in via the AI dock backend dropdown — your code only leaves the machine when you explicitly point Notepatra at one. No analytics, no telemetry. Verifiable with strace.
Why GPL-3.0?
Because the source is open, modifications should stay open. If you embed Notepatra in a product, the product has to be GPL-compatible. If that's a problem, reach out — alternative licensing can be discussed.
Will you add LSP support?
Planned for a future milestone. The goal is to plug into clangd, rust-analyzer, pyright, etc. for proper autocomplete and go-to-definition. The Scintilla autocomplete today is word-based.
Where's the Windows ARM64 build?
Not yet — Windows ARM64 is a small but growing market. Tracked on the roadmap.
Contributing
See CONTRIBUTING.md for the full contributor guide. Short version:
- File issues via the issue templates (bug report, feature request)
- PRs should pass CI (build + tests on all 3 platforms) and include a test when fixing a bug
- Code style: `.clang-format` (LLVM base, 4-space indent, 100 col)
- Signed commits preferred
Security policy
See SECURITY.md and /.well-known/security.txt. Reports via GitHub private vulnerability reporting. 90-day disclosure window by default.
License
Notepatra is licensed under the GNU General Public License v3.0. No warranty. See GPL-3.0 §15 and §16 for the disclaimer of warranty and limitation of liability.