Show HN: Breathe-Memory – Associative memory injection for LLMs (not RAG)
LLMs forget. The standard fix is RAG: retrieve chunks, stuff them into the prompt. It works until it doesn't. Irrelevant chunks waste tokens, summaries lose structure, and nothing actually models how memory works.

Breathe-memory takes a different approach: associative injection. Before each LLM call, it extracts anchors from the user's message (entities, temporal references, emotional signals), traverses a concept graph via BFS, runs an optional vector search, and injects only what's relevant, typically in under 60 ms.

When the context fills up, instead of summarizing, it extracts a structured graph: topics, decisions, open questions, artifacts. This preserves the semantic structure that summaries destroy.

The whole thing is ~1,500 lines of Python, interface-based, with zero mandatory dependencies. Plug in any database, any LLM, any vector store. The reference implementation uses PostgreSQL + pgvector.

https://github.com/tkenaz/breathe-memory

We've been running this in production for several months. We're open-sourcing it because we think the approach (injection over retrieval) is underexplored and deserves more attention.

We've also written up memory injection in a more human-readable form, if you want to see the thinking under the hood: https://medium.com/towards-artificial-intelligence/beyond-rag-building-memory-injections-for-your-ai-assistants-ceedcea20419
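For the curious, the injection flow is simple enough to sketch in a few lines. This is a toy approximation, not breathe-memory's actual API: the names (extract_anchors, ConceptGraph, build_prompt) are made up for illustration, and real anchor extraction handles entities, temporal references, and emotional signals rather than the capitalized-word stand-in used here.

```python
from collections import deque

def extract_anchors(message: str) -> list[str]:
    # Stand-in for real anchor extraction; here just capitalized
    # words, so the example stays self-contained.
    return [w.strip(".,!?") for w in message.split() if w[:1].isupper()]

class ConceptGraph:
    def __init__(self) -> None:
        self.edges: dict[str, set[str]] = {}  # concept -> neighboring concepts
        self.notes: dict[str, str] = {}       # concept -> stored memory text

    def bfs(self, anchors: list[str], max_hops: int = 2) -> list[str]:
        # Bounded breadth-first traversal from each anchor, collecting
        # the memories attached to every reachable concept.
        seen: set[str] = set()
        queue = deque((a, 0) for a in anchors)
        hits: list[str] = []
        while queue:
            node, depth = queue.popleft()
            if node in seen or depth > max_hops:
                continue
            seen.add(node)
            if node in self.notes:
                hits.append(self.notes[node])
            queue.extend((n, depth + 1) for n in self.edges.get(node, ()))
        return hits

def build_prompt(graph: ConceptGraph, user_message: str) -> str:
    # Inject only the memories reachable from the message's anchors,
    # then append the user turn: injection rather than retrieval.
    memories = graph.bfs(extract_anchors(user_message))
    context = "\n".join(f"[memory] {m}" for m in memories)
    return f"{context}\n\nUser: {user_message}"

# Toy usage:
g = ConceptGraph()
g.edges = {"Alice": {"project-x"}, "project-x": set()}
g.notes = {"project-x": "Alice leads project-x; launch slipped to March."}
print(build_prompt(g, "What did Alice say about the launch?"))
```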
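The overflow step, roughly: instead of asking the model for a summary, ask it for structure. The four fields are the ones from the post; the prompt wording and the dataclass are our own sketch, not the repo's code.

```python
import json
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ContextGraph:
    topics: list[str] = field(default_factory=list)
    decisions: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)
    artifacts: list[str] = field(default_factory=list)

EXTRACTION_PROMPT = (
    "From the conversation below, return JSON with keys 'topics', "
    "'decisions', 'open_questions', and 'artifacts'. Keep items "
    "verbatim where possible; do not summarize.\n\n{transcript}"
)

def compress_context(transcript: str, llm_call: Callable[[str], str]) -> ContextGraph:
    # llm_call is any text-in/text-out client; the library is
    # interface-based, so the LLM backend is pluggable.
    raw = json.loads(llm_call(EXTRACTION_PROMPT.format(transcript=transcript)))
    keys = ("topics", "decisions", "open_questions", "artifacts")
    return ContextGraph(**{k: raw.get(k, []) for k in keys})
```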
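And "interface-based" in practice means small pluggable interfaces along these lines (illustrative Protocol names, not the actual ones in the repo):

```python
from typing import Optional, Protocol

class GraphStore(Protocol):
    # Any database that can answer these two queries can back the
    # concept graph; the reference implementation uses PostgreSQL.
    def neighbors(self, concept: str) -> list[str]: ...
    def note(self, concept: str) -> Optional[str]: ...

class VectorIndex(Protocol):
    # Optional vector search; pgvector in the reference implementation.
    def search(self, text: str, k: int = 5) -> list[str]: ...
```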