Tuesday 7th April issue is presented by Uplevel

84% of developers use AI tools. Fewer than 3% of organizations have meaningfully changed how they ship — or can show what it's worth to the business.
StackUp is a free 10-minute diagnostic that benchmarks your engineering org against peers across ways of working, alignment, velocity, and environment — and tells you which 2–3 changes will have the biggest impact on your AI ROI.
|
|
|
|
— Alex Piechowski |
|
tl;dr: “After years of codebase audits, the same five signals keep showing up, so I finally put them into a scoring rubric: The Codebase Drag Audit. Five signals, scored 0 to 2. If you hit 4 or above, the code needs direct investment before anything else will help.” |
Leadership Management |
|
|
— Dave Kellogg |
|
tl;dr: “There’s a question I’ve been mulling for a while now, and I think it’s time to write it down: when is it okay to use generative AI in a given business context, and when does it cross a line? I’ll focus on two specific areas I know well — board work and strategic analysis — but I think the principles generalize.” |
Leadership Management |
|
|
|
tl;dr: 84% of developers use AI tools. Fewer than 3% of organizations have meaningfully changed how they ship — or can show what it's worth to the business. StackUp is a free 10-minute diagnostic that benchmarks your engineering org against peers across ways of working, alignment, velocity, and environment — and tells you which 2–3 changes will have the biggest impact on your AI ROI. |
Promoted by Uplevel |
Leadership Management |
|
|
— Rahul Garg |
|
tl;dr: “AI coding assistants respond to whoever is prompting, and the quality of what they produce depends on how well the prompter articulates team standards. I propose treating the instructions that govern AI interactions as infrastructure: versioned, reviewed, and shared artifacts that encode tacit team knowledge into executable instructions, making quality consistent regardless of who is at the keyboard.”
Leadership Management |
|
“Too many of us are not living our dreams because we are living our fears.” — Les Brown
|
|
|
|
— Anton Zaides |
|
tl;dr: “As everybody and their mother thinks they can build great software right now, I decided to help them avoid a bit of pain. Here are 7 laws every engineer has broken at least once, learned the hard way.” |
BestPractices |
|
|
|
tl;dr: AI is outpacing traditional code review, creating a verification bottleneck. This report breaks down the shift: (1) a growing trust gap (96% of developers distrust AI output), (2) the move to automated guardrails, and (3) embedding verification directly into the SDLC with a “trust, but verify” approach.
Promoted by CodeReview |
CodeReview |
|
|
— Lalit Maganti |
|
tl;dr: “There’s no shortage of posts claiming that AI one-shot their project or pushing back and declaring that AI is all slop. I’m going to take a very different approach and, instead, systematically break down my experience building syntaqlite with AI, both where it helped and where it was detrimental.” |
Tools Productivity |
|
|
— Gergely Orosz |
|
tl;dr: “Many engineers use inference daily, but inference engineering is a bit obscure – and an area rich with interesting challenges. Philip Kiely, author of the new book, “Inference Engineering,” explains.” |
DeepDive |
|
|
— William Pliger |
|
tl;dr: “System architecture diagrams are essential tools for documenting complex systems. However, common mistakes in these diagrams can lead to confusion, misinterpretation, and frustration for viewers. Here’s a rundown of seven (more!) common mistakes to avoid.” |
BestPractices Architecture |
|
Editorial Note |
A friend passed this article on and it stuck with me. |
It likens the economics of AI to the subprime crash, arguing that AI labs (OpenAI, Anthropic) obfuscate their finances, with zero path to profitability and an irreversibly broken business model.
The issue is that the impending doom of these AI labs becomes our own instability, given our staggering adoption, growing over-reliance, and rapid workflow integration.
AI feels like a magical tool to me but, perhaps, part of that magical feeling is that it’s always on tap? For engineering managers, this raises uncomfortable questions: if code generation isn’t truly free, how should we think about cost, ownership, and quality over time?
To be clear, I’m not convinced this is inevitable, but the argument is worth sitting with.
PS. I’m experimenting with this section to pen personal thoughts. Feel free to hit reply and share any feedback. |
|
Most Popular From Last Issue |
What I Learned From Nearly 1,000 Interviews At Amazon — Steve Huynh
|
Notable Links |
Claude How-To: Visual, example-driven guide. |
EmDash: TypeScript CMS based on Astro. |
Gallery: On-device GenAI use cases. |
Goose: Open-source AI agent that automates engineering tasks.
QMD: Search engine for everything you need to remember. |
|
|
How did you like this issue of Pointer? 1 = Didn't enjoy it at all // 5 = Really enjoyed it | 1 | 2 | 3 | 4 | 5
|
|