Architecture · 6 min read · March 28, 2025

Pinecone Namespace Strategy for Multi-Tenant RAG: Best Practices

One index, thousands of tenants — using Pinecone namespaces as the isolation boundary. Covers cost model, query pattern, and why we chose this over separate indexes.


Multi-Tenancy in Vector Databases

Building a multi-tenant AI SaaS demands strict data isolation. When User A queries their knowledge base, there must be zero chance that they retrieve vectors belonging to User B's corporate documents. In a SaaS product, cross-tenant data leakage is fatal to customer trust.

Index-per-Tenant vs Namespace Isolation

Initially, developers often try provisioning a brand-new Pinecone index for every single bot or user. This breaks down at scale: Pinecone caps the number of indexes per project, provisioning a new index takes minutes, and cold starts across hundreds of indexes degrade the user experience.

The industry standard, and the architecture behind VegaRAG, is a single, globally scaled serverless index partitioned by unique namespaces, one per tenant.

# The VegaRAG Pinecone insertion pattern: the tenant's bot_id is the
# namespace, so every vector is physically scoped to that tenant.
index.upsert(
    vectors=[...],
    namespace=str(agent.bot_id),
)

Advantages of Namespaces

By using the bot_id as the namespace string:

  • Hard boundary isolation: Queries executed on a specific namespace will NEVER scan vectors outside of it.
  • Velocity to deploy: We can instantiate a new tenant instantly because serverless indexes don't require pre-provisioning shards.
  • Filter compatibility: Metadata filtering (like filtering by specific document URLs within an agent's knowledge base) stacks natively on top of the namespace boundary.

To implement this safely, the API surface derives the bot_id server-side from the authenticated user's session token, preventing insecure direct object reference (IDOR) attacks by users attempting to brute-force a competitor's namespace.
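In other words, the namespace is never read from the request body. A minimal sketch of the server-side derivation (the `SESSIONS` lookup stands in for whatever session store or auth middleware is actually in use):

```python
# Hypothetical session store mapping opaque session tokens to tenant bot_ids.
# In production this would be a real session backend, not a dict.
SESSIONS: dict[str, str] = {"tok-abc": "bot-42"}

def resolve_namespace(session_token: str) -> str:
    """Derive the Pinecone namespace from the authenticated session,
    never from client-supplied input, so a caller cannot name (or
    brute-force) another tenant's namespace."""
    bot_id = SESSIONS.get(session_token)
    if bot_id is None:
        raise PermissionError("invalid or expired session")
    return bot_id
```

Any query handler then calls `resolve_namespace()` once and passes the result straight into the Pinecone call, so there is no code path where a user-controlled string becomes a namespace.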

Build exactly what you just read.

VegaRAG is entirely open-source and ready for production on AWS.