/LLM

The LLM Honeymoon Phase Is About To End

- Baldur Bjarnason tl;dr: “This is going to get automated, weaponised, and industrialised. Tech companies have placed chatbots at the centre of our information ecosystems and butchered their products to push them front and centre. The incentives for bad actors to try to game them are enormous and they are capable of making incredibly sophisticated tools for their purposes.”

featured in #548


Using GPT-4o For Web Scraping

- Eduardo Blancas tl;dr: “I’m pretty excited about the new structured outputs feature in OpenAI’s API so I took it for a spin and developed an AI-assisted web scraper. This post summarizes my learnings.”

featured in #546
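
To give a flavour of the approach the post describes, here is a minimal sketch of scraping with OpenAI's structured outputs, assuming the openai Python SDK (v1.40+) and pydantic v2; the `Product` schema and `scrape()` helper are invented for this example, not taken from the article.

```python
# Illustrative sketch, not code from the article: ask GPT-4o for output that
# must conform to a pydantic schema, then use the parsed result directly.
from openai import OpenAI
from pydantic import BaseModel


class Product(BaseModel):
    name: str
    price: str
    in_stock: bool


class ProductList(BaseModel):
    products: list[Product]


client = OpenAI()  # reads OPENAI_API_KEY from the environment


def scrape(html: str) -> ProductList:
    # The JSON schema derived from ProductList constrains the model's output,
    # so the response parses straight into typed objects.
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-2024-08-06",
        messages=[
            {"role": "system", "content": "Extract every product from this HTML."},
            {"role": "user", "content": html},
        ],
        response_format=ProductList,
    )
    return completion.choices[0].message.parsed
```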


Classifying All Of The PDFs On The Internet

- Santiago Pedroza tl;dr: “I classified the entirety of SafeDocs using a mixture of LLMs, Embeddings Models, XGBoost and just for fun some LinearRegressors. In the process I too created some really pretty graphs!”

featured in #545
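
For context, a common shape of the "embeddings plus a classical classifier" part of such a pipeline is sketched below; this is an illustrative recipe under assumed choices (sentence-transformers for embeddings, placeholder labels and documents), not the author's exact setup.

```python
# Illustrative recipe: embed extracted PDF text, then train a
# gradient-boosted classifier on the embedding vectors.
from sentence_transformers import SentenceTransformer
from xgboost import XGBClassifier

texts = [
    "Annual financial report for fiscal year ...",      # placeholder documents
    "User manual: installing the printer driver ...",
]
labels = [0, 1]  # e.g. 0 = finance, 1 = technical documentation

encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(texts)            # one dense vector per document

clf = XGBClassifier(n_estimators=300, max_depth=6)
clf.fit(X, labels)

new_doc = encoder.encode(["Quarterly earnings exceeded expectations ..."])
print(clf.predict(new_doc))          # predicted label for the new document
```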


Looming Liability Machines (LLMs)

- Murat Demirbas tl;dr: “We discussed a paper that uses LLMs for automatic root cause analysis (RCA) for cloud incidents. This was a pretty straightforward application of LLMs. The proposed system employs an LLM to match incoming incidents to incident handlers based on their alert types, predicts the incident's root cause category, and provides an explanatory narrative... The use of LLMs for RCAs spooked me viscerally.”

featured in #544


LLM Applications I Want To See

- Sarah Constantin tl;dr: “But the most creative and interesting potential applications go beyond ‘doing things humans can already do, but cheaper’ to do things that humans can’t do at all on comparable scale.” Sarah shares a list of app ideas.

featured in #543


AI Engineering For AI Error Resolution

- Dr. Panos Patros tl;dr: Discover how this engineering team used Large Language Models (LLMs) for smarter debugging with AI Error Resolution, a feature that preloads prompts with relevant data, offering instant AI-powered solutions to production issues. Learn about their development journey, key requirements, and the impact on application reliability and security. Think AI Engineering has nothing new to offer? Read on to see how skilful software engineering still plays a crucial role when working with AI components.

featured in #525
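
The core pattern here, preloading the prompt with the context an engineer would otherwise gather by hand, can be sketched roughly as follows; the function, prompt wording, and model choice are assumptions for illustration, not the team's implementation.

```python
# Rough sketch of "preload the prompt with relevant data": bundle the error
# context (stack trace, recent logs, source snippet) into one LLM request.
from openai import OpenAI

client = OpenAI()


def suggest_fix(stack_trace: str, recent_logs: str, source_snippet: str) -> str:
    prompt = (
        "A production error occurred.\n\n"
        f"Stack trace:\n{stack_trace}\n\n"
        f"Recent logs:\n{recent_logs}\n\n"
        f"Relevant source:\n{source_snippet}\n\n"
        "Suggest the most likely root cause and a fix."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```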


Postgres Is All You Need, Even For Vectors

- Eric Zakariasson tl;dr: “When working with LLMs, you usually want to store embeddings, a vector space representation of some text value. During the last few years, we’ve seen a lot of new databases pop up, making it easier to generate, store, and query embeddings: Pinecone, Weaviate, Chroma, Qdrant. The list goes on. But having a separate database where I store a different type of data has always seemed off to me. Do I really need it?”

featured in #524
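
One common way to answer "do I really need it?" with a plain Postgres setup is the pgvector extension. A minimal sketch, assuming psycopg 3 and the pgvector Python package, with an invented schema and a random vector standing in for a real embedding:

```python
# Store embeddings next to the rest of your data in Postgres via pgvector.
import numpy as np
import psycopg
from pgvector.psycopg import register_vector

conn = psycopg.connect("postgresql://localhost/app", autocommit=True)
conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
register_vector(conn)  # lets psycopg adapt numpy arrays to the vector type

conn.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id bigserial PRIMARY KEY,
        body text,
        embedding vector(1536)
    )
""")

embedding = np.random.rand(1536)  # stand-in for a real embedding model's output
conn.execute(
    "INSERT INTO documents (body, embedding) VALUES (%s, %s)",
    ("hello world", embedding),
)

# Nearest neighbours by cosine distance, right next to the relational data.
rows = conn.execute(
    "SELECT body FROM documents ORDER BY embedding <=> %s LIMIT 5",
    (embedding,),
).fetchall()
```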


Let's Reproduce GPT-2 (124M)

- Andrej Karpathy tl;dr: “We reproduce the GPT-2 (124M) from scratch. This video covers the whole process: First we build the GPT-2 network, then we optimize its training to be really fast, then we set up the training run following the GPT-2 and GPT-3 paper and their hyperparameters, then we hit run, and come back the next morning to see our results, and enjoy some amusing model generations.” 

featured in #523
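
For reference, the sizes that make this model "124M" are well known (12 layers, 12 heads, 768-wide embeddings, 1024-token context, 50257-token vocabulary). The sketch below is a back-of-the-envelope parameter check built around those numbers, not code from the video.

```python
# Minimal GPT-2 (124M) size sketch: config fields plus a rough parameter count.
from dataclasses import dataclass


@dataclass
class GPTConfig:
    block_size: int = 1024   # maximum context length
    vocab_size: int = 50257  # BPE merges + byte tokens + <|endoftext|>
    n_layer: int = 12        # transformer blocks
    n_head: int = 12         # attention heads per block
    n_embd: int = 768        # embedding / residual stream width


def approx_params(cfg: GPTConfig) -> int:
    # Token + position embeddings (the output head is tied to the token table),
    # then per block: attention (4 * n_embd^2) and MLP (8 * n_embd^2) weights,
    # plus biases and layer norms (about 13 * n_embd).
    emb = cfg.vocab_size * cfg.n_embd + cfg.block_size * cfg.n_embd
    per_block = 12 * cfg.n_embd ** 2 + 13 * cfg.n_embd
    return emb + cfg.n_layer * per_block


print(f"{approx_params(GPTConfig()) / 1e6:.0f}M parameters")  # ~124M
```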


What We’ve Learned From A Year Of Building With LLMs

tl;dr: “We’ve spent the past year building, and have discovered many sharp edges along the way. While we don’t claim to speak for the entire industry, we’d like to share what we’ve learned to help you avoid our mistakes and iterate faster. These are organized into three sections: tactical, operational and strategic.”

featured in #520


Don't Worry About LLMs

- Vicki Boykis tl;dr: Vicki shares the challenges of working with LLMs and advises teams to focus on specific use cases, establish clear evaluation metrics, build modular systems, and troubleshoot complex issues by getting "close to the metal."

featured in #518