/Phillip Carter

So We Shipped An AI Product. Did it Work? tl;dr: “Like many companies, earlier this year we saw an opportunity with LLMs and quickly but thoughtfully started building a capability. About a month later, we released Query Assistant to all customers as an experimental feature. We then iterated on it, using data from production to inform a multitude of additional enhancements, and ultimately took Query Assistant out of experimentation and turned it into a core product offering. However, getting Query Assistant from concept to feature diverted R&D and marketing resources, forcing the question: did investing in LLMs do what we wanted it to do?”

featured in #454


All The Hard Stuff Nobody Talks About When Building Products With LLMs tl;dr: (1) Context windows are a challenge with no complete solution. (2) LLMs are slow and chaining is a nonstarter. (3) Prompt engineering is weird and has few best practices. (4) Correctness and usefulness can be at odds. (5) Prompt injection is an unsolved problem.

featured in #418