Your Vector DB Is Lying: The Index Structures Destroying Recall (And How to Fix It)

“Your vector database promises 99% recall. You’re getting 67%. Here’s why every index structure makes tradeoffs, and how to choose the right one for YOUR use case.”

TL;DR: Vector databases don’t store and search vectors the way you think. HNSW builds a navigation graph that can miss neighbors. IVF clusters can put similar vectors in different buckets. Metadata filtering happens AFTER search, destroying recall. This guide explains how each vector DB actually works, when to use which, and how to configure them properly. ...
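
To make the IVF tradeoff concrete, here is a minimal sketch (mine, not the article’s code) using faiss with random stand-in vectors: with a small nprobe, true neighbors that landed in a neighboring cluster are never scanned, and measured recall drops.

```python
import numpy as np
import faiss  # pip install faiss-cpu

d, nb, nq, k = 128, 100_000, 100, 10
rng = np.random.default_rng(0)
xb = rng.random((nb, d), dtype=np.float32)  # corpus vectors (stand-in data)
xq = rng.random((nq, d), dtype=np.float32)  # query vectors

# Ground truth from an exact brute-force index.
flat = faiss.IndexFlatL2(d)
flat.add(xb)
_, gt = flat.search(xq, k)

# IVF: every vector is assigned to one of nlist coarse clusters at add time.
quantizer = faiss.IndexFlatL2(d)
ivf = faiss.IndexIVFFlat(quantizer, d, 1024)
ivf.train(xb)
ivf.add(xb)

for nprobe in (1, 8, 64):
    ivf.nprobe = nprobe  # how many clusters to scan per query
    _, pred = ivf.search(xq, k)
    recall = np.mean([len(set(p) & set(g)) / k for p, g in zip(pred, gt)])
    print(f"nprobe={nprobe:3d}  recall@{k}={recall:.2f}")
```

Raising nprobe buys back recall at the cost of latency; where to set it depends on your data, which is the article’s point.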

October 9, 2025 · 13 min · Lakshay Chhabra

The Dimension Lie: Why Your 3072D Embeddings Are Mostly Zeros (And How to Fix It)

“You’re paying for 3072 dimensions. PCA reveals you’re using ~200. But here’s the twist - those extra dimensions aren’t useless, they’re insurance. Let me show you the math.”

TL;DR: Your high-dimensional embeddings aren’t “mostly zeros” - they occupy all dimensions but with exponentially decaying variance and redundancy. Yes, you can compress them to 200-256 dimensions with minimal loss. No, companies aren’t stupid for shipping 3072D models - they’re optimizing for general-purpose use. This guide shows you how to measure what YOU actually need and optimize accordingly. ...
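
A quick way to run the post’s measurement on your own data: fit PCA and count how many components you need for a given share of variance. A minimal sketch assuming scikit-learn; the random matrix is only a placeholder for your real embedding matrix (real embeddings will show far steeper decay than this).

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder: substitute the matrix of your own 3072-D embeddings here.
embeddings = np.random.randn(5000, 3072)

pca = PCA().fit(embeddings)
cumvar = np.cumsum(pca.explained_variance_ratio_)
for target in (0.90, 0.95, 0.99):
    k = int(np.searchsorted(cumvar, target)) + 1
    print(f"{target:.0%} of variance is captured by the first {k} components")
```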

October 5, 2025 · 14 min · Lakshay Chhabra

How Words Become Vectors: Embeddings Inside Transformers (Without Tears)

“Your embedding API returns a vector. Here are the 12 disasters happening inside and the 5 you can actually fix.”

TL;DR: Modern embedding libraries handle CLS pooling disasters for you. But they can’t fix anisotropy (vectors clustering in narrow cones), length bias (longer texts having systematically different magnitude distributions), or domain vocabulary collapse (subword tokenization destroying semantic units). This guide shows what’s breaking, what’s already fixed, and what you still need to handle. ...
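
The pooling fix the post refers to looks roughly like this: a hedged sketch of mask-aware mean pooling with Hugging Face transformers (the model name is just an example, not necessarily the one the article uses).

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "sentence-transformers/all-MiniLM-L6-v2"  # example model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

batch = tok(["a short text", "a much, much longer text about embeddings"],
            padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state        # (batch, seq, dim)

# Average only over real tokens: padding positions are zeroed by the mask.
mask = batch["attention_mask"].unsqueeze(-1).float() # (batch, seq, 1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(embeddings.shape)                              # torch.Size([2, 384])
```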

October 3, 2025 · 11 min · Lakshay Chhabra

How Spacing and Capitalization Randomly Change Your Model’s Entire Personality

Add a space before your prompt and watch GPT become 30% dumber. Write in ALL CAPS and suddenly it’s aggressive. Use “pls” instead of “please” and it becomes casual. This isn’t personality; it’s tokenization chaos triggering different training data pockets.

TL;DR: " Hello" and "Hello" activate completely different neural pathways. "HELP" vs "help" vs "Help" pulls from different training contexts (emergency manuals vs casual chat vs formal documents). Your model doesn’t have moods; it has tokenization-triggered personality disorders. ...
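
You can verify the spacing and casing claim in a few lines with tiktoken (my sketch, using the GPT-4-era cl100k_base encoding; exact token IDs vary by encoding):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for text in ["Hello", " Hello", "HELLO", "help", "Help", "HELP"]:
    # Different surface forms map to entirely different token IDs.
    print(f"{text!r:10} -> {enc.encode(text)}")
```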

September 8, 2025 · 8 min · Lakshay Chhabra

How Tokenization Murders Your Model’s Ability to Do Basic Math

GPT-4o can write Shakespeare but struggles with 4-digit multiplication. It’s not stupid; it literally can’t see numbers the way you do. "12345" might be ["123", "45"] while "12346" is ["1", "2346"]. Try doing math when numbers randomly shatter into chunks.

TL;DR: Tokenizers split numbers inconsistently, making arithmetic nearly impossible. "9.11" > "9.9" according to many models because ".11" and ".9" are different tokens. Your calculator app works. Your $100B language model doesn’t. This is why. ...
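
To watch digit strings shatter, decode each token separately; a small sketch with tiktoken (splits differ across tokenizers, which is exactly the problem):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for s in ["12345", "12346", "9.11", "9.9"]:
    # Decode token by token to see how the number was chunked.
    pieces = [enc.decode([t]) for t in enc.encode(s)]
    print(f"{s:>6} -> {pieces}")
```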

September 7, 2025 · 7 min · Lakshay Chhabra

Why Your Vector Database Thinks $AAPL Means Polish Batteries

Your $50K vector database is returning garbage results because "$AAPL" tokenizes as ["$", "AA", "PL"] and now has the embedding of “dollar + AA batteries + Poland”. Your semantic search for “Apple stock” returns articles about Polish currency. This isn’t a retrieval problem; it’s tokenization murdering your embeddings.

TL;DR: Bad tokenization creates noisy embeddings. "COVID-19" split into ["CO", "VID", "-", "19"] has embeddings mixing “Colorado”, “video”, “negative”, and “2019”. Your RAG pipeline is doomed before it starts. Fix tokenization or waste money on larger models trying to compensate. ...
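
A cheap pre-flight audit in the same spirit: run your domain vocabulary through the tokenizer and flag terms that fragment badly before they ever reach the embedding model. A sketch with tiktoken; the term list and threshold are illustrative, not from the article.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
domain_terms = ["$AAPL", "$TSLA", "COVID-19", "EBITDA"]  # your jargon here

for term in domain_terms:
    pieces = [enc.decode([t]) for t in enc.encode(term)]
    flag = "  <-- fragmented, embedding will be noisy" if len(pieces) >= 3 else ""
    print(f"{term:>10} -> {pieces}{flag}")
```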

September 6, 2025 · 7 min · Lakshay Chhabra

Tokenization Leaks the Training Set (The Forensics Goldmine)

Want to know if GPT-4 was trained on your company’s leaked data? Check if your internal codenames are single tokens. Want to detect whether a model saw specific Reddit posts? The tokenizer already told you.

TL;DR: Tokenizers are accidental forensic evidence. If "SolidGoldMagikarp" is a single token, that string appeared thousands of times in the training data. This is how researchers discovered that GPT models were trained on specific Reddit users’ posts, leaked databases, and private codebases. ...
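
The forensic test itself is nearly one line; a sketch with tiktoken’s r50k_base, the GPT-2/GPT-3-era vocabulary where the " SolidGoldMagikarp" token (leading space included) famously lives. The codename string is hypothetical.

```python
import tiktoken

enc = tiktoken.get_encoding("r50k_base")
for s in [" SolidGoldMagikarp", " YourInternalCodename"]:
    n = len(enc.encode(s))
    # A single-token string had to appear very often in the BPE training data.
    verdict = "single token -> frequent in the training corpus" if n == 1 else f"{n} tokens"
    print(f"{s!r}: {verdict}")
```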

September 3, 2025 · 6 min · Lakshay Chhabra

Byte-Level Tokenizers Can Bloat Non-English (The Colonial Tax)

Your Hindi users pay 17x more than English users for the same word. Your Arabic users’ prompts fail because they hit token limits 8x faster. This isn’t a bug; it’s algorithmic colonialism baked into your tokenizer.

TL;DR: Tokenizers trained on English-heavy data punish non-Latin scripts with massive token inflation. "Internationalization" = 1 token in English, 17 tokens in Hindi. Your global users are subsidizing your English users, and they’re getting worse model performance too. ...
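
Measuring the inflation for your own strings takes seconds; a sketch with tiktoken (counts vary by encoding, so check the one your provider actually bills on):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
samples = {
    "English": "Internationalization",
    "Hindi":   "अंतर्राष्ट्रीयकरण",  # the same concept in Devanagari
}
for lang, word in samples.items():
    print(f"{lang:8} -> {len(enc.encode(word))} tokens")
```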

September 2, 2025 · 6 min · Lakshay Chhabra

The Tokenization Papers: Why 90% of LLM Failures Start Here

The Hidden Layer That Controls Everything

Every prompt you send to GPT, Claude, or Gemini gets shredded into tokens before the model even “thinks.” These aren’t words; they’re compression artifacts from 2021 web crawls that now dictate:

- Your API bill (why Hindi costs 17x more than English)
- Your model’s IQ (why it thinks 9.11 > 9.9)
- Your RAG accuracy (why $AAPL returns articles about Polish batteries)

Tokenization is the silent killer of production AI systems. These papers expose the disasters hiding in plain sight. ...

September 1, 2025 · 3 min · Lakshay Chhabra

The Tokenization Decision Tree: When to Train, When to Run, When to Cry

A biotech company spent $2M training a custom medical tokenizer for their revolutionary drug discovery model. Six months later, they switched to GPT-4 with a 500-line preprocessing script. It performed better. Their custom tokenizer? Now it’s a $2M reminder that sometimes the “right” solution is the wrong solution.

TL;DR: Training your own tokenizer means training your own model ($10M minimum). Extending tokenizers breaks everything. Most “tokenization problems” are solved better with preprocessing hacks than with proper solutions. Here’s the decision tree that will save you millions and your sanity. ...
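
For flavor, a hypothetical fragment of the “preprocessing script” route the post favors: rewrite tokenizer-hostile strings before they are embedded or prompted. The cashtag regex and expansion table are mine, not the article’s.

```python
import re

# Illustrative mapping: expand cashtags so "$AAPL" stops shattering
# into "$" + "AA" + "PL" and picking up unrelated meanings.
EXPANSIONS = {
    "$AAPL": "Apple Inc. stock (AAPL)",
    "$TSLA": "Tesla Inc. stock (TSLA)",
}

def normalize_for_tokenizer(text: str) -> str:
    # Replace known cashtags; leave unknown ones untouched.
    return re.sub(r"\$[A-Z]{1,5}\b",
                  lambda m: EXPANSIONS.get(m.group(0), m.group(0)),
                  text)

print(normalize_for_tokenizer("Is $AAPL overvalued?"))
# -> Is Apple Inc. stock (AAPL) overvalued?
```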

August 31, 2025 · 8 min · Lakshay Chhabra