Marc Brooker

The Builder’s Guide To Better Mousetraps tl;dr: “I tend to be biased towards innovation. Towards building. I think most advice for technical leaders over-emphasizes the short-term risks of innovating too much, and under-emphasizes the long-term risks of innovating too little. However, both sides have good points, and we owe it to ourselves and our businesses to think carefully about the decision. Because of my bias, I force myself to deeply question my motivations when making decisions like this,” asking questions such as: (1) What could I be doing instead? (2) Do I want to own this? (3) Am I solving a simpler problem?

featured in #498


Better Benchmarks Through Graphs tl;dr: “I believe that one of the things that’s holding back databases as an engineering discipline is a lack of good benchmarks, especially ones available at the design stage. The gold standard is designing for and benchmarking against real application workloads, but there are some significant challenges achieving this ideal.” Marc discusses an approach to develop benchmarks that shine light on a database’s design decisions.

featured in #493


How Do You Spend Your Time? tl;dr: “You thought you were productive, and getting a lot done, but they weren’t the things you, or your manager, thought were most valuable for your project and team. You’re busy, you’re productive, but it doesn’t feel right. It’s a problem I’ve faced before, which I think I’ve mostly solved for myself. Here’s some thoughts on what worked for me.”

featured in #492


Optimism vs Pessimism In Distributed Systems tl;dr: Marc discusses the balance between optimistic and pessimistic assumptions. Optimistic assumptions, which anticipate successful outcomes without immediate coordination, contrast with pessimistic ones that proactively coordinate to prevent potential conflicts. The author exemplifies these concepts through distributed caches, Optimistic Concurrency Control, and leases. By identifying and categorizing these assumptions, developers can better understand and optimize system behavior.
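
The optimistic pattern Marc describes can be made concrete with Optimistic Concurrency Control. This is a minimal illustrative sketch (class and method names are hypothetical, not from the post): read a versioned value, compute without holding a lock, and commit only if the version is unchanged.

```python
# Illustrative OCC sketch: optimism assumes no conflict and checks at commit
# time; pessimism would take a lock up front instead.

class VersionedStore:
    def __init__(self):
        self._value = 0
        self._version = 0

    def read(self):
        return self._value, self._version

    def compare_and_set(self, expected_version, new_value):
        # The optimistic assumption is validated here, at the last moment.
        if self._version != expected_version:
            return False  # another writer won the race; caller retries
        self._value = new_value
        self._version += 1
        return True

store = VersionedStore()
value, version = store.read()
while not store.compare_and_set(version, value + 1):
    value, version = store.read()  # re-read and retry on conflict
```

The retry loop is the cost of optimism: cheap when conflicts are rare, wasteful when they are common, which is exactly the trade-off the post explores.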

featured in #459


Exponential Value At Linear Cost tl;dr: “Binary search is kind of a magical thing. With each additional search step, the size of the haystack we can search doubles. In other words, the value of a search is exponential in the amount of effort. That's a great deal. There are a few similar deals like that in computing, but not many. How often, in life, do you get exponential value at linear cost? Here's another important one: redundancy,” which Marc discusses here.
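
Both deals from the post can be shown in a few lines. Worst-case binary search probes grow only logarithmically with haystack size, and with independent replica failures (an assumption), the probability that all replicas fail shrinks exponentially with replica count.

```python
def probes(n):
    # Worst-case probes to find an item among n sorted items:
    # floor(log2(n)) + 1, which for positive ints is n.bit_length().
    return n.bit_length()

# Linear cost, exponential value: 30 probes cover a billion items.
search_costs = {n: probes(n) for n in (1, 1_000, 1_000_000, 1_000_000_000)}

# Redundancy, same deal: assuming independent failures with probability p
# per replica, losing all n replicas at once has probability p ** n.
p = 0.01  # assumed per-replica failure probability
total_loss = {n: p ** n for n in (1, 2, 3)}
```

Each extra probe or replica is a constant additional cost, but doubles the searchable space or multiplies availability, which is the post's point.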

featured in #453


Invariants: A Better Debugger? tl;dr: Marc emphasizes the use of invariants, conditions that must hold true during or after code execution, as a powerful debugging tool. Through examples, the author illustrates how developers can use invariants to reason about complex algorithms and distributed systems. Invariants offer a deterministic, repeatable way to understand and ensure correctness, making them a valuable alternative to traditional debuggers.
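
A small example of the technique (my own illustration, not code from the post): stating a binary search's bounds invariant as an assertion makes the algorithm's correctness checkable on every iteration, rather than inspectable only in a debugger.

```python
def binary_search(items, target):
    # Invariant: if target is in items, its index lies in [lo, hi).
    lo, hi = 0, len(items)
    while lo < hi:
        # Assert the invariant's bounds on every iteration; a violation
        # fails deterministically and repeatably, unlike a debugger session.
        assert 0 <= lo <= hi <= len(items)
        mid = (lo + hi) // 2
        if items[mid] < target:
            lo = mid + 1  # target, if present, is to the right of mid
        else:
            hi = mid + 1 if False else mid  # target is at mid or to its left
    return lo if lo < len(items) and items[lo] == target else -1
```

The assertion documents *why* the loop is correct, so when it fires, it points at the broken assumption rather than at a symptom.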

featured in #437


Atomic Commitment: The Unscalability Protocol tl;dr: Marc describes the classic CS problem Atomic Commitment. "The classic solution to this classic problem is Two-phase commit, maybe the most famous of all distributed protocols. There's a lot we could say about atomic commitment, or even just about two-phase commit. In this post, I'm going to focus on just one aspect: Atomic Commitment has weird scaling behavior."
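
The scaling behavior Marc points at falls out of the protocol's shape. In this minimal sketch (names are illustrative), a single “no” vote, or a single unavailable participant, aborts everyone, so with each participant independently available with probability a, a commit needs roughly a ** n availability.

```python
# Illustrative two-phase commit sketch: commit requires unanimity.

class Participant:
    def __init__(self, vote_yes):
        self.vote_yes = vote_yes
        self.state = None  # set to "commit" or "abort" in phase 2

    def prepare(self):
        return self.vote_yes

    def finish(self, decision):
        self.state = decision

def two_phase_commit(participants):
    # Phase 1 (prepare): every participant must vote yes.
    votes = [p.prepare() for p in participants]
    decision = "commit" if all(votes) else "abort"
    # Phase 2: broadcast the unanimous decision.
    for p in participants:
        p.finish(decision)
    return decision
```

Adding participants multiplies the chances that *someone* says no or is down, which is the “weird scaling behavior” in one line: the protocol gets less available as it grows.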

featured in #360


Software Deployment, Speed, and Safety tl;dr: Deployment is a contentious issue. Some folks believe that “teams have to be able to deploy from commit to global production in minutes,” while “others will point out that their industry has multi-year product cycles.” Marc highlights the trade-offs, provides his perspective, and offers advice on navigating the tension between speed and safety.

featured in #289


Caches, Modes, And Unstable Systems tl;dr: "Good caches have feedback loops. Like back pressure, and limited concurrency. Bad caches are typically open-loop. This starts to give us a hint about how we may use caches safely, and points to some of the safe patterns for distributed systems caching."
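
One way to picture a closed-loop cache (a hypothetical sketch, not code from the post): a read-through cache that bounds concurrent backend fetches with a semaphore, so a cold or flushed cache cannot stampede the backend, which is one form of the back pressure and limited concurrency the quote describes.

```python
import threading

class BoundedCache:
    """Read-through cache whose misses are throttled by a semaphore."""

    def __init__(self, loader, max_concurrent_misses=4):
        self.loader = loader
        self.data = {}
        self.lock = threading.Lock()
        # The feedback loop: at most N misses may hit the backend at once.
        self.miss_limit = threading.Semaphore(max_concurrent_misses)

    def get(self, key):
        with self.lock:
            if key in self.data:
                return self.data[key]
        with self.miss_limit:  # back pressure on the backend
            value = self.loader(key)
        with self.lock:
            self.data[key] = value
        return value

cache = BoundedCache(loader=lambda k: k * 2, max_concurrent_misses=2)
```

An open-loop cache would let every miss go straight to the backend, so an empty cache turns into a traffic spike exactly when the backend is least able to absorb it.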

featured in #250


Latency Sneaks Up On You tl;dr: "High-percentile latency is a bad way to measure efficiency, but a good (leading) indicator of pending overload. If you must use latency to measure efficiency, use mean (average) latency."
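
The distinction is easy to demonstrate with a toy sample (the numbers are illustrative, not from the post): a handful of slow requests barely moves the mean but dominates the high percentiles, which is why p99 reacts to queueing long before the average does.

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def percentile(xs, p):
    # Simple nearest-rank percentile (no interpolation).
    s = sorted(xs)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

# 95 fast requests at 10 ms, 5 queued requests at 500 ms.
latencies = [10] * 95 + [500] * 5
avg = mean(latencies)          # reflects total work done
p50 = percentile(latencies, 50)
p99 = percentile(latencies, 99)  # reflects the queueing tail
```

Here the mean is 34.5 ms while p99 is 500 ms: the tail screams "overload is coming" while the average stays almost flat, matching Marc's advice to use each metric for what it is good at.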

featured in #248