
Daniel Davis of TrustGraph and Kirk Marple of Graphlit discuss the 2024 state of RAG. Whether it's RAG, GraphRAG, or HybridRAG, a lot has changed since the term became ubiquitous in AI. Where are we, where are we going, and where should we be going? All are answered in this discussion.

00:00
00:20 Introductions
04:10 The Term "RAG" Itself
06:20 Long Context Windows
08:10 Claude 3.5 Haiku
11:20 LLM Pricing Variance
14:11 What Happened to Claude 3 Opus?
19:03 AI Maturity
23:22 What is AGI?
26:40 Entity Extraction with LLMs
32:18 RDF? Cypher? Something Else?
36:36 Why So Many New GraphDBs and VectorDBs?
42:23 Reinventing the Wheel
42:48 "You Don't Need LangChain"
44:20 How to Promote Emerging Projects
46:53 "Hype Matters"
49:15 Where is RAG 1 Year from Now?
54:09 Should AI Model Itself on Human Cognition?
58:45 The DARPA MUC AI Conferences

🔗 Graphlit Links:
➡️ Website: https://graphlit.com

🔗 Kirk's Links:
➡️ Twitter: https://x.com/kirkmarple

🔗 Daniel's Links:
➡️ Twitter: https://x.com/trustspooky

🔗 TrustGraph Links:
➡️ GitHub: https://github.com/trustgraph-ai/trustgraph
➡️ TrustGraph Config UI: https://config-ui.demo.trustgraph.ai/
➡️ Website: https://trustgraph.ai/
➡️ Discord: https://discord.gg/sQMwkRz5GX
➡️ Blog: https://blog.trustgraph.ai
➡️ LinkedIn: https://www.linkedin.com/company/trustgraph/