New voitta-rag features

A follow-up to our earlier looks at voitta-rag vs llm-tldr, the February updates, and the search-scope release.

voitta-rag has kept moving since then. The recent work is less about flashy new connectors and more about something arguably more important: usability. Because dogfooding is real.

Login got more practical

voitta-rag now supports Microsoft OAuth and Google token validation. That matters because a self-hosted knowledge layer gets much more useful once people can sign in with the accounts they already use for work, instead of maintaining a parallel identity system just for search.

In Microsoft-heavy shops (yeah, ok, shut up) this also tightens the loop with SharePoint permissions: the same work identity can be used both for login and for permission-aware retrieval.
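To make "token validation" concrete, here is a minimal sketch of the kind of claim checks an OAuth login flow performs. This is illustrative only: the function names are mine, and a real deployment must also verify the token signature against the provider's published JWKS keys rather than trusting the decoded payload.

```python
import base64
import json
import time

def decode_claims(jwt_token: str) -> dict:
    """Decode the payload segment of a JWT (no signature check here)."""
    payload_b64 = jwt_token.split(".")[1]
    # JWT segments are base64url without padding; re-add it before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def claims_look_valid(claims: dict, expected_issuer: str, expected_audience: str) -> bool:
    """Check the standard claims a login flow cares about: who issued the
    token, who it was issued for, and whether it has expired.
    NOTE: real validation must ALSO verify the cryptographic signature."""
    return (
        claims.get("iss") == expected_issuer
        and claims.get("aud") == expected_audience
        and claims.get("exp", 0) > time.time()
    )
```

The payoff of doing this against Google/Microsoft issuers is exactly what the post describes: the work identity arrives pre-verified, so no parallel account database is needed.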

GDrive-specific: URLs can now resolve back to indexed content

One of the more quietly useful additions is source URL resolution. If content came from Google Docs, Sheets, or Slides, voitta-rag can now store the source URL in chunk metadata and resolve that URL back to the indexed material through MCP.

That sounds small until you think about actual workflow. Someone drops a Google Docs link into chat, ticket comments, or an LLM prompt. Instead of treating the link as an opaque pointer and making the assistant start from scratch, voitta-rag can connect it to content it already knows.

This also works with GDrive-based pointers that appear on your disk as *.gdoc files.

Docker mode looks much more usable

Docker mode now auto-discovers mapped folders, distinguishes managed mounts from ordinary folders, etc. Local filesystem sources also got a real first-class flow instead of feeling bolted on.
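Auto-discovery of mapped folders presumably boils down to inspecting mount information inside the container. Here is a rough sketch over `/proc/self/mounts`-style lines; the `/data` root and the `.voitta` naming convention for "managed" mounts are my inventions for illustration, since the post does not document the real heuristic.

```python
def discover_mounts(mounts_text: str, data_root: str = "/data") -> list[dict]:
    """Parse /proc/self/mounts-style lines (device, mount point, fstype, ...)
    and list mount points under data_root. The managed-vs-ordinary split
    here uses a hypothetical `.voitta` suffix convention."""
    found = []
    for line in mounts_text.splitlines():
        parts = line.split()
        if len(parts) < 2:
            continue
        mount_point = parts[1]
        if mount_point.startswith(data_root + "/"):
            found.append({
                "path": mount_point,
                "managed": mount_point.endswith(".voitta"),
            })
    return found
```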

This works well if, for example, your admin does not allow voitta-rag to read Google Drive directly but you do run the Google Drive desktop app: voitta-rag can index the locally synced files instead (with *.gdoc pointers resolving as described above), and that flow is now supported cleanly.

Claude Code integration got real

There is now a Claude Code plugin setup flow, plus tooling to import Claude Code session history into voitta-rag memory. That is a meaningful step beyond “here is an MCP server” toward “here is a workflow.”
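Claude Code keeps session transcripts as JSONL files (one JSON event per line), so an importer is essentially a line-by-line parse into memory records. The field names below ("role", "content") are an assumption about the transcript shape, and the record format is mine, not voitta-rag's:

```python
import json

def import_session(jsonl_text: str) -> list[dict]:
    """Convert a Claude Code session transcript (JSONL, one event per line)
    into simple memory records, keeping only user/assistant turns and
    skipping blank lines and other event types."""
    records = []
    for line in jsonl_text.splitlines():
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("role") in ("user", "assistant"):
            records.append({"role": event["role"], "text": event.get("content", "")})
    return records
```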

The interesting part is not just convenience. It hints at voitta-rag becoming a memory layer around actual agent work: not only your repos and docs, but also the history of what the assistant did, why, and in what context.

Bulk repo handling improved

Bulk repository import/export got better documentation and a round-trip workflow, and Git sync learned a practical trick: when token auth is in play, SSH repository URLs can be converted automatically to HTTPS.
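The SSH-to-HTTPS trick is mechanical enough to sketch. The point is that a personal-access token authenticates over HTTPS, not SSH, so `git@host:org/repo.git` remotes need rewriting before token auth can work. A minimal version (my own regex, not voitta-rag's implementation) might be:

```python
import re

# Matches both scp-style (git@host:path) and ssh:// (git@host/path) remotes.
_SSH_URL = re.compile(r"^(?:ssh://)?git@([^:/]+)[:/](.+?)(?:\.git)?$")

def ssh_to_https(url: str) -> str:
    """Rewrite an SSH remote to its HTTPS equivalent so a token can be
    used for authentication. Non-SSH URLs pass through unchanged."""
    m = _SSH_URL.match(url)
    if not m:
        return url
    host, path = m.groups()
    return f"https://{host}/{path}.git"
```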

That is exactly the kind of fix mature tools accumulate. It does not make for a dramatic screenshot, but it removes friction from the real environments where people actually deploy this stuff.

The direction is getting clearer

At first glance voitta-rag looks like “RAG for code and documents.” That is still true, but increasingly incomplete.

What is emerging is a self-hosted knowledge substrate for AI work: identity-aware, connector-rich, MCP-accessible, and increasingly conscious of workflow instead of just indexing. The recent changes are part polish, part plumbing, but together they make the system feel much closer to something a team could rely on every day.

Well… Almost… There’ll be more.
