Everyone's trying vectors and graphs for AI memory. We went back to SQL
When we first started building with LLMs, the gap was obvious: they could reason well in the moment, but forgot everything as soon as the conversation moved on. You could tell an agent, “I don’t like coffee,” and three steps later it would suggest espresso again. It wasn’t broken logic; it was missing memory.

Over the past few years, people have tried a bunch of ways to fix it:

1. Prompt stuffing / fine-tuning – Keep prepending history. Works for short chats, but tokens and cost explode fast.
2. Vec...
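The first approach is easy to sketch. This is a minimal illustration, not any particular library's API; the helper name `build_prompt` is hypothetical:

```python
def build_prompt(history: list[tuple[str, str]], new_message: str) -> str:
    """Prompt stuffing: prepend every prior turn to the new message."""
    lines = [f"{role}: {text}" for role, text in history]
    lines.append(f"user: {new_message}")
    return "\n".join(lines)

# Illustrative conversation history.
history = [
    ("user", "I don't like coffee."),
    ("assistant", "Noted, no coffee suggestions."),
]

prompt = build_prompt(history, "What should I drink this afternoon?")
print(prompt)
```

Every request re-sends the entire history, so the prompt grows with each turn: per-request token usage scales linearly with conversation length, and total tokens over a whole conversation grow quadratically. That is the cost explosion in practice.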