Essays by Kent Beck

Brief summaries of Kent's writing on software design, TDD, extreme programming, and AI-augmented development.

Subscribe on Substack ↗

80 essays · 123K+ subscribers · read by senior software engineers in 202 countries

How does AI change the build versus buy versus customize decision in software?

The economics of build/buy/customize shift when AI can rapidly prototype solutions and reduce implementation friction. Kent Beck examines how generative tools reframe the tradeoff engineers face when choosing between custom development, off-the-shelf products, and hybrid approaches.

Read full essay on Substack ↗

Build a sorted map faster than B-trees by walking keys instead of comparing them

Tries offer O(1) lookups per byte and automatic sorted iteration without key comparisons or balancing, but naive implementations waste memory on sparse keysets. The Adaptive Radix Tree fixes this by layering decisions—lazy expansion, path compression, and polymorphic node sizes—to match trie efficiency with practical space overhead, trading cache misses for control over the data structure's shape.
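
The core trick can be sketched in a few lines. This is a hypothetical illustration of a plain byte-trie, not the essay's Adaptive Radix Tree implementation — a real ART adds the lazy expansion, path compression, and polymorphic node sizes described above:

```python
# Minimal byte-trie sketch: lookups walk one byte at a time instead of
# comparing whole keys, and sorted iteration falls out of byte order free.
class TrieNode:
    def __init__(self):
        self.children = {}  # byte -> TrieNode (an ART would use 4/16/48/256-slot nodes)
        self.value = None

class ByteTrie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, key: bytes, value):
        node = self.root
        for b in key:
            node = node.children.setdefault(b, TrieNode())
        node.value = value

    def get(self, key: bytes):
        node = self.root
        for b in key:  # O(1) work per byte, never a full-key comparison
            node = node.children.get(b)
            if node is None:
                return None
        return node.value

    def items(self):
        """Yield (key, value) pairs in sorted key order, without comparing keys."""
        def walk(node, prefix):
            if node.value is not None:
                yield bytes(prefix), node.value
            for b in sorted(node.children):
                yield from walk(node.children[b], prefix + [b])
        yield from walk(self.root, [])
```

The memory waste the essay mentions is visible here: every node carries its own dictionary, even on long single-child chains — exactly what path compression and adaptive node sizes exist to fix.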

Read full essay on Substack ↗

When to optimize data structure performance versus code clarity tradeoffs

Adaptive data structures like radix trees require balancing three competing goals: execution speed, algorithmic correctness, and maintainable code. Kent Beck explores how to navigate these tradeoffs when designing for both performance and team comprehension.

Read full essay on Substack ↗

Why AI-generated code ends up in the complexity tarpit and how to escape it

AI genies produce plausible but broken code that accumulates complexity faster than human developers can manage—leaving teams stuck in a "muddling" region where software barely works and can't change. The real leverage isn't better prompts or training data; it's understanding where your team operates on two axes (working features vs. flexible code) and deliberately moving toward both.

Read full essay on Substack ↗

Why multi-agent AI coding tools create coordination overhead instead of solving it

Multi-agent AI systems promise to parallelize coding work, but they often shift the cognitive load onto you—the engineer managing which agent does what. The real problem isn't swarms; it's outcome-orientation. You should describe what you want the code to do and let the system figure out feasibility and cost, not prompt-engineer your way through agent coordination.

Read full essay on Substack ↗

Why passing tests feel boring and what that signals about your test design

When test suites feel tedious rather than informative, it often means your tests are too shallow or decoupled from real behavior. A senior engineer should feel engaged by tests because they reveal design problems and guide refactoring—if you're bored, your tests may be checking implementation details instead of contracts.

Read full essay on Substack ↗

How does mortality change your career compensation strategy after 40

Your discount rate isn't constant—it changes as you age and your time horizon shrinks. Tech compensation is structured around long-term vesting and delayed liquidity that makes sense at 30 but becomes economically irrational in late-career stages. Understanding the time value of money as a function of remaining lifespan, not just interest rates, reveals why your older colleagues make different financial and career decisions than you should.
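
The arithmetic behind this is ordinary present-value discounting. A small worked example (the grant size and rates here are illustrative assumptions, not figures from the essay):

```python
# Present value of an equal annual vest: the same nominal grant is worth
# less to someone applying a higher personal discount rate — i.e., someone
# whose remaining time horizon is shorter.
def pv_of_vesting(total, years, rate):
    per_year = total / years
    return sum(per_year / (1 + rate) ** t for t in range(1, years + 1))

pv_low_discount = pv_of_vesting(400_000, 4, 0.05)   # long horizon, patient
pv_high_discount = pv_of_vesting(400_000, 4, 0.20)  # short horizon, impatient
```

At a 5% personal discount rate the four-year grant is worth roughly 354k today; at 20% it drops to roughly 259k — the same paper value, rationally priced very differently.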

Read full essay on Substack ↗

How do you reprioritize work and life when facing progressive physical limitation

Kent Beck shares his Parkinson's diagnosis and a framework for decision-making under constraints: the time value of time. When you know your capacity will decline, present years become exponentially more valuable than future ones. This reframes how senior engineers should think about what work to accept, which projects to pursue, and how to allocate energy between earning and living.

Read full essay on Substack ↗

How do you recognize and respond to boundary violations in relationships?

Healthy connection requires both people to meet in the middle of a bridge—literally and emotionally. Red flags appear when one person crosses too far into the other's space: either you're over-giving to force connection, or they're over-sharing without respecting boundaries. When boundaries collapse, manipulation and harm follow. The solution is staying aware of where you are on the bridge and shutting down connections that insist on crossing into your space.

Read full essay on Substack ↗

What software leaders should learn from enterprise AI adoption patterns

Enterprise AI adoption reveals patterns in organizational change, technical decision-making, and leadership priorities that apply beyond AI itself. Kent Beck shares observations from the Enterprise AI Summit that challenge assumptions about how teams integrate new technology and manage uncertainty at scale.

Read full essay on Substack ↗

Why AI model providers cut usage limits simultaneously despite fierce competition

When multiple AI providers cut usage limits at the same time, it's not a capacity crisis—it's a narrative one. The bottleneck isn't chips or compute; it's the investor story about the path to profitability. Whoever bends the supply curve up first through cheaper inference or better unit economics wins the next wave of growth.

Read full essay on Substack ↗

Can AI pair programming handle test-commit-reset workflows without losing context?

TCR (test && commit || revert) forces tight feedback loops by automatically resetting failed code to the last passing state. Beck explores whether AI coding assistants with persistent skill-based context can maintain coherence across these rapid reset cycles—turning genie-style AI into a reliable TDD partner rather than a stateless code generator.
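
The workflow named in the title is small enough to sketch. This is a minimal hypothetical runner assuming pytest and git, not Kent's actual tooling:

```python
import subprocess

def tcr_decision(tests_passed: bool) -> str:
    """The entire TCR rule: green code is kept, red code is discarded."""
    return "commit" if tests_passed else "revert"

def tcr(test_cmd=("pytest", "-q")):
    """Run one TCR cycle against the current working tree."""
    passed = subprocess.run(list(test_cmd)).returncode == 0
    if tcr_decision(passed) == "commit":
        subprocess.run(["git", "commit", "-am", "tcr: green"])
    else:
        # discard tracked changes, returning to the last passing commit
        subprocess.run(["git", "reset", "--hard"])
    return tcr_decision(passed)
```

The brutality of that `reset --hard` is the point: an AI partner that loses a few minutes of work on every red bar needs durable context outside the working tree to stay coherent.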

Read full essay on Substack ↗

Detecting small design problems before they become system failures

Small inconsistencies and friction in code are early warning signs of deeper architectural problems. Learning to recognize and act on these "tremors" — before they cascade into major refactors or rewrites — separates teams that maintain velocity from those that slow down under technical debt.

Read full essay on Substack ↗

Why individual programming speed plateaus despite experience and effort

Progress in software engineering feels personal but follows predictable patterns across teams and skill levels. Senior engineers often plateau because they optimize locally—focusing on individual productivity—when systemic constraints like communication, testing infrastructure, and team coordination determine actual throughput.

Read full essay on Substack ↗

Realign incentives to break gridlock between conflicting technical or organizational stakeholders

When two groups are locked in opposition, changing the rules they operate under can transform the dynamic from zero-sum conflict to aligned incentives. Kent Beck illustrates this through Oregon's forest management crisis: by shifting payment structures so stakeholders benefit from long-term health rather than short-term extraction, loggers, environmentalists, and capital all gain. The same principle applies to software teams: misaligned incentives create the architectural equivalent of gridlock.

Read full essay on Substack ↗

How to align your writing and teaching with what engineers actually need

Kent Beck invites readers to shape his future writing by sharing what problems they're actively solving. Understanding your audience's real challenges—not assumed ones—makes technical writing more useful and sponsorships more meaningful. A brief survey surfaces the gaps between what you're building and what deserves public attention.

Read full essay on Substack ↗

Can AI coding assistants break the project constraints triangle

The iron triangle of software projects—speed, cost, and quality—has long forced trade-offs. AI-augmented coding tools like Genie may fundamentally alter this constraint by automating routine work, allowing teams to improve quality and velocity simultaneously without proportional cost increases. This challenges the assumption that you must sacrifice one dimension to gain another.

Read full essay on Substack ↗

How P50 goals and dependency management differ between exploration and extraction phases

Software teams often apply Extract-phase management practices—KPIs, tight schedules, inter-team dependencies—to Explore work, where the actual value lies in learning and discovery. Kent Beck argues that accomplishing 50% of ambitious goals signals healthy exploration: it means you've learned something unexpected or discovered higher-impact work than planned. Extract demands reliability and predictability; Explore demands discovery and minimal dependencies.

Read full essay on Substack ↗

How do you balance shipping features against investing in code health for long-term compounding returns

Kent Beck distinguishes between two development games: The Finish Line Game (spec-driven, one-off delivery) and The Compounding Game (where each completed feature funds the next). Long-term software systems require alternating investment between features and futures—the architectural and design work that keeps complexity manageable as the system grows. This reframes the TDD and refactoring debate: the question isn't whether to tidy code, but which game you're actually playing.

Read full essay on Substack ↗

Using AI code generation to implement Mac GPU-accelerated data structures

Kent Beck pairs with Codex to build GPUSortedMap, a GPU-optimized sorted map for macOS, in real-time. The session shows how modern AI coding assistants handle performance-critical data structures and the back-and-forth needed to ship working code.

Read full essay on Substack ↗

How generational differences affect communication in technical teams

Communication across generational divides in software teams often breaks down due to different norms and expectations rather than technical incompetence. Understanding these gaps—how younger engineers approach problems, collaborate, and interpret feedback—helps senior engineers lead more effectively and build stronger teams.

Read full essay on Substack ↗

Why AI-for-labor-replacement thinking limits software engineering value

Treating AI as a labor-replacement tool narrows its economic and creative potential in software development. Engineers who frame AI as substitution miss opportunities for augmentation, skill elevation, and entirely new capabilities that amplify human expertise rather than eliminate it.

Read full essay on Substack ↗

Will AI code generation make traditional source code obsolete

Source code isn't disappearing—but its role is shifting as AI pair programming becomes standard practice. Kent explores how developers will interact with code generation tools while maintaining the design clarity and intent that pure automation cannot capture alone.

Read full essay on Substack ↗

Beyond cost reduction: what AI truly enables for engineering organizations

Labor replacement is the narrowest measure of AI's value. The real leverage comes from expanding what your team can do—higher revenue per engineer, faster time-to-market, delayed capital costs, and new business models that weren't possible before. Understanding these four levers of value creation separates transformative AI adoption from incremental optimization.

Read full essay on Substack ↗

Breaking down organizational silos with AI-augmented development practices

Organizational silos persist because teams lack visibility into each other's work and decisions. AI tools that surface cross-team dependencies and shared context can reduce friction and improve coordination without requiring process overhead — but only if teams actively use them to communicate intent, not just code.

Read full essay on Substack ↗

How AI pair programming changes exploration and throwaway code decisions

AI-augmented coding flips the economics of experimentation: when an AI can rapidly prototype variations, you can afford to explore multiple solution paths and discard failed attempts without penalty. This shifts how senior engineers think about optionality and learning velocity in complex problem domains.

Read full essay on Substack ↗

Should teams refactor code together or let individuals tidy incrementally

Collaborative refactoring—tidying code as a team practice rather than solo work—forces alignment on standards and builds shared ownership. This approach surfaces design disagreements early, prevents divergent code styles, and turns maintenance into a team ritual that strengthens collective understanding of the system.

Read full essay on Substack ↗

How do you guide AI coding assistants toward better architecture and development practices?

Personas and architectural constraints drive different outcomes when working with AI coding partners. A persona like "code like Kent Beck" improves testing style and naming conventions, while explicit design constraints like "use the Composite pattern" reshape the system architecture—but combining both yields the best results. The real leverage comes from computational selection: running many coding approaches and selecting winners, rather than encoding human expertise into prompts.

Read full essay on Substack ↗

How to build sustainable relationships without overcommitting or withdrawing

Connection isn't binary—it happens in measured steps across bridges you build together. Kent Beck explores a mental model for collaboration where you reach out, share something real, then wait at the midpoint for reciprocal investment. This applies equally to human relationships and design consensus-building: pushing past halfway without mutual movement creates pursuit, not partnership.

Read full essay on Substack ↗

How metrics-driven product development creates user-hostile features

Product teams optimize for measurable engagement because individual contributors need to demonstrate value, creating a systematic incentive to build features that annoy users. The mechanism isn't malice—it's locally rational decisions compounding into enshittification, and no amount of additional metrics can solve a problem rooted in measurement itself.

Read full essay on Substack ↗

Why IDE feedback delays over 400ms destroy flow and what to do instead

Modern IDEs optimize for completeness over speed, forcing developers to wait 30+ seconds for perfect feedback instead of delivering partial answers in 400 milliseconds—the threshold where human attention stays engaged. Respecting the Doherty Threshold means prioritizing ruthless feedback ordering: show the most important signal first, let partial results arrive fast, and measure tools by time-to-first-feedback, not thoroughness.
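
The "partial results first" idea can be sketched as a generator that streams checks in cost order. This is a hypothetical check set — only the syntax pass is implemented here:

```python
import time

def stream_feedback(source: str):
    """Yield feedback cheapest-first so the first signal lands well inside
    the ~400 ms Doherty threshold, while slower analyses still run."""
    start = time.perf_counter()
    try:
        compile(source, "<buffer>", "exec")
        yield ("syntax", "ok", time.perf_counter() - start)
    except SyntaxError as e:
        yield ("syntax", f"line {e.lineno}: {e.msg}", time.perf_counter() - start)
    # slower passes (type checks, lints, tests) would be yielded here,
    # each as soon as it finishes, rather than batched at the end
```

Measuring the tool by the timestamp on the first yielded tuple — time-to-first-feedback — is the metric the essay argues for.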

Read full essay on Substack ↗

How to maintain code quality when AI generates faster than humans can review

When AI pair programming accelerates development past traditional code review's pace, the bottleneck shifts from catching bugs to maintaining structural integrity. Kent Beck explores what code review actually needs to accomplish in augmented development—sanity-checking intent versus output, and preserving codebase health for both human and AI to work effectively.

Read full essay on Substack ↗

Should you hire junior developers when AI coding assistants compress their ramp time

Junior developers are typically expensive bets—you invest senior attention during a long "valley of regret" before they become productive. AI coding assistants fundamentally change this math by collapsing the search space and accelerating learning, shrinking the ramp from 24 months to 9 and improving survival rates past breakeven from 64% to 85%. The bet on juniors is profitable again if you manage for learning, not production.

Read full essay on Substack ↗

When to optimize infrastructure: exploration vs expansion vs extraction phases

Product development has three distinct phases with different system design priorities: exploration (optimize for experimentation speed), expansion (eliminate bottlenecks before they choke growth), and extraction (scale profitably). Premature infrastructure optimization during exploration reduces your chances of finding product-market fit; waiting until real usage patterns emerge during expansion lets you fix actual bottlenecks rather than speculative ones.

Read full essay on Substack ↗

Last chance $180/year newsletter pricing expires today

Kent Beck's newsletter is running a final day promotion at $180/year (24% discount). This is a time-sensitive offer for engineers interested in regular insights on software design, TDD, and AI-augmented coding.

Read full essay on Substack ↗

How to organize code changes: tidying versus feature work

Tidying—making small, structural improvements without changing behavior—deserves its own commits and review cycles, separate from feature work. This separation lets teams decide consciously whether to tidy before, after, or alongside features, rather than mixing cleanup into feature commits that obscure intent and slow review.

Read full essay on Substack ↗

How AI coding changes what safety means for software engineers

Kent Beck is exploring what psychological and technical safety looks like as AI systems enter the coding workflow—not offering answers yet, but working through the implications in public with senior engineers who pay to shape the thinking. Paid subscribers access early essays on responsibility and coherence at 10x speed, weekly thinking patterns, and direct chat where real problems get solved.

Read full essay on Substack ↗

Why does development velocity crash as features accumulate despite team effort?

Development slows because each feature burns optionality in the codebase — increased complexity, backwards compatibility constraints, and reduced flexibility for future work. The solution isn't to choose between shipping features and maintaining code quality; it's to deliberately invest in restoring optionality between features through targeted tidying, creating a sustainable rhythm of feature-then-options rather than feature-after-feature until the system breaks.

Read full essay on Substack ↗

Aligning code changes with stated intent in version control

When your git history doesn't match your intentions, it obscures why code changed and makes debugging harder. Kent Beck explores how to make commits, messages, and refactoring decisions transparent about what you're actually doing versus what you said you'd do.

Read full essay on Substack ↗

How to reduce test duplication without losing coverage or specificity

Composable tests achieve the same predictive power as redundant, copy-pasted tests while improving readability and maintainability. By separating orthogonal concerns—like computation logic from reporting logic—you can test each dimension independently and combine them minimally, reducing test count from N×M to N+M+1 without sacrificing specificity or the ability to pinpoint failures.
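
The N+M+1 structure looks like this in miniature (a hypothetical shopping-cart example invented here, not the essay's code):

```python
def compute_total(prices):            # computation concern: N test cases
    return sum(prices)

def format_report(total, currency):   # reporting concern: M test cases
    return f"{currency}{total:.2f}"

def test_total_of_empty_cart():
    assert compute_total([]) == 0

def test_total_sums_prices():
    assert compute_total([1.50, 2.25]) == 3.75

def test_report_in_dollars():
    assert format_report(5, "$") == "$5.00"

def test_report_in_euros():
    assert format_report(5, "€") == "€5.00"

def test_concerns_compose():          # the "+1" test that the pieces connect
    assert format_report(compute_total([2, 3]), "$") == "$5.00"
```

Two computation tests plus two reporting tests plus one composition test replaces the four copy-pasted end-to-end combinations — and a failure still points at exactly one concern.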

Read full essay on Substack ↗

Why pitching hackathon ideas kills their economic value and exploration potential

Hackathons create value through uncertainty and convex payoffs—surprise breakthroughs that couldn't be predicted. Requiring pitched ideas before approval filters out the highest-potential explorations, exactly when we should embrace chaos over rational gatekeeping. Better strategies expand limited resources or let hackers negotiate access directly rather than killing ideas upfront.

Read full essay on Substack ↗

How to manage engineering constraints and team coordination during product launch countdown

Pre-launch engineering operates under fundamentally different constraints than normal development: downside from mistakes far exceeds upside from new features, time is fixed, and fatigue compounds risk. Kent Beck argues the goal isn't shipping every feature—it's shipping a usable product safely by staying conservative, over-communicating, and protecting team sustainability through sleep and coordination.

Read full essay on Substack ↗

Why measuring lines of code or hours worked destroys software delivery outcomes

The earlier you measure in the effort-to-impact chain, the easier observations become—but also easier to game. Lines of code, pull requests, and hours worked are so disconnected from actual customer outcomes that optimizing them actively incentivizes destructive behavior. A simpler linear analysis, not systems thinking, is enough to predict this failure.

Read full essay on Substack ↗

Should test frameworks distinguish failed assertions from unexpected exceptions

Test frameworks face a design choice: treat assertion failures and unexpected exceptions as equivalent, or separate them into distinct failure modes. This distinction becomes critical when tests need to communicate intent—assertion failures signal expected behavior violations, while exceptions signal broken assumptions or infrastructure problems. The right choice depends on how your team uses tests to drive design and debugging.

Read full essay on Substack ↗

How to teach software engineers AI-augmented coding workflows

Teaching AI-augmented coding requires rethinking how engineers learn to collaborate with LLMs as coding partners. Rather than treating AI as a tool to avoid, the most effective approach frames it as a collaborative technique that amplifies human judgment—combining machine speed with human reasoning about design, trade-offs, and long-term code quality.

Read full essay on Substack ↗

How will AI-accelerated coding change programmer demand and career value

Programming deflation—driven by AI making code cheaper to write—creates a paradox: as tools improve, the incentive to delay work increases, yet experimentation costs approaching zero often win. Unlike traditional economic deflation, this productivity-driven abundance doesn't destroy value; it redirects it toward integration, judgment, and understanding what to build rather than writing code itself.

Read full essay on Substack ↗

How do you get AI coding assistants to give you honest performance comparisons instead of biased answers

When AI genies contradict themselves on performance benchmarks, you need structural incentives, not better prompts. Kent Beck separated the roles—one genie optimizes code, another independently audits it in an isolated environment—eliminating the conflict of interest that causes AI to rationalize poor results. This approach treats multi-agent AI like game theory: separated actors can't collude or fudge measurements.

Read full essay on Substack ↗

How to scale software design patterns as team size grows

Design patterns and practices that work for solo developers or small teams often break at scale. Switching scale requires rethinking communication, ownership, and decision-making structures — not just adding more people or processes. Kent Beck explores when and how to fundamentally shift your architectural approach as constraints change.

Read full essay on Substack ↗

How should development tools evolve when AI generates most of your code?

IDEs were optimized for manual typing—careful navigation, syntax checking, incremental edits. But with AI pair programming, the bottleneck shifts from writing code to reviewing generated code. Tools need to be redesigned around that new workflow: better diffs, faster validation, clearer intent-matching—not auto-completion and syntax highlighting.

Read full essay on Substack ↗

Why AI coding assistants repeat the same mistakes in loops

AI pair programmers can get trapped in unproductive cycles, generating similar flawed solutions repeatedly without breaking the pattern. Understanding when an AI genie is stuck—and how to interrupt that loop—is critical for effective AI-augmented development.

Read full essay on Substack ↗

How to grow a newsletter from passion project to sustainable business

Kent Beck shares the operational and financial decisions behind scaling his newsletter from a side project to a business that supports his writing full-time. This covers audience growth strategy, sponsorship models, and the trade-offs between editorial independence and revenue sustainability—practical insights for engineers considering their own platforms or publications.

Read full essay on Substack ↗

How AI pair programming agents compete to improve code quality

When multiple AI coding assistants propose different solutions to the same problem, their competition surfaces better design decisions than any single agent alone. Kent Beck explores how directing generative AI tools against each other—rather than accepting the first suggestion—reveals trade-offs in readability, performance, and maintainability that senior engineers would catch in code review.

Read full essay on Substack ↗

How do expert engineers learn unfamiliar technologies without getting stuck or burning out

Learning new tools, languages, and paradigms is constant in engineering—but most developers either freeze up or push through to exhaustion. The best explorers distinguish between productive confusion and being totally lost, recognize the delicate moment before understanding clicks, and know when to step away. Self-awareness across these phases is what separates effective learners from those who burn out chasing shiny tools.

Read full essay on Substack ↗

Reduce development environment state and irreversibility with cloud-based setups

Local development environments fail unpredictably because they combine high variability, interconnection, state complexity, and irreversibility—a model from economics that explains why systems become uncontrollable. Cloud development environments (like Gitpod) solve this by providing identical, reproducible state for all developers and enabling instant rollback to known-good configurations, eliminating the maintenance tax that consumes tens of percent of engineering time.

Read full essay on Substack ↗

How to identify and eliminate bottlenecks in product delivery systems

Software delivery is a chain of pipes: product, design, engineering, operations. The output is limited by the narrowest bottleneck, not the capacity of individual teams. As an executive, your unique vantage point lets you identify constraints others can't see, then systematically expand only the bottleneck while reducing upstream work—a lever individual contributors lack.

Read full essay on Substack ↗

How to build sustainable software practice without burnout

Sustainable software engineering isn't about heroic effort—it's about creating conditions where you can do your best work consistently. Kent Beck explores what it means to have a "place" in your practice: a foundation of rest, boundaries, and intentional design that lets you ship quality code without sacrificing your wellbeing.

Read full essay on Substack ↗

How does pair programming change code tidying and refactoring decisions

Pair programming fundamentally shifts when and how you refactor code. Working together makes tidying a social practice rather than a solitary one, creating opportunities to align on design intent while the code is still fresh, and distributing the cognitive load of maintaining consistency across a team.

Read full essay on Substack ↗

How does software design theory guide practical refactoring decisions

Theory grounds refactoring decisions in principles rather than preference, helping senior engineers distinguish between cosmetic tidying and structural improvements that reduce future cost. Beck explores how explicit design theory prevents endless debate about code style and focuses effort on changes that matter.

Read full essay on Substack ↗

How should engineering teams decide when to refactor code together

Refactoring is most effective when done collectively rather than individually, because shared tidying decisions prevent divergent code styles and build team consensus on quality standards. Kent Beck explores how synchronized tidying—especially in the "management section" of refactoring work—strengthens both code and team dynamics.

Read full essay on Substack ↗

How to manage API limits and token costs during hypergrowth

When demand for your AI product explodes faster than your infrastructure can handle, the game shifts from optimization to pure survival. You have two levers—increase supply (servers, API accounts, providers) or decrease demand (kill features, gate users)—and days to choose, not months. Capital stops being your constraint; tokens do.

Read full essay on Substack ↗

How do teams maintain code quality while balancing competing priorities and perspectives

Teams grow software successfully not by forcing alignment, but by establishing shared practices that accommodate different goals and incentives. Tidy Together explores how collective code stewardship—through consistent refactoring, clear communication, and mutual accountability—creates an environment where senior engineers, junior engineers, and business stakeholders can work toward sustainable systems despite their differing perspectives.

Read full essay on Substack ↗

How to separate useful feedback from someone's projection of their own fears

Not all criticism deserves equal weight. Kent Beck's First Feedback Filter teaches you to distinguish between feedback about your actual work and feedback that reveals the giver's biases, anxieties, or blind spots—especially useful when receiving emotionally charged input on controversial topics like AI-augmented coding or XP practices. The core move: pause before responding, identify what's actually about you versus what's about the feedback-giver's fears, and weight your response accordingly.

Read full essay on Substack ↗

How code tidying affects business sustainability and team optionality

Tidying code isn't just about aesthetics—it directly impacts a team's ability to respond to market changes and maintain cash flow. By keeping your codebase in a state where you can quickly pivot or scale, you preserve the options that let a business survive uncertainty.

Read full essay on Substack ↗

How AI-assisted coding differs from prompt-based generation in practice

Augmented coding—where you actively shape AI output toward tidy, tested code—differs fundamentally from "vibe coding," where you chase fixes in a loop. Kent Beck built a production-ready B+ Tree library in Rust and Python by treating the AI as a junior engineer to direct, not a magic box, catching warning signs like unexpected loops, unrequested features, and disabled tests.

Read full essay on Substack ↗

When context switching hurts productivity versus when parallel work improves it

Multi-tasking in software teams isn't binary—the cost depends on task type, team structure, and cognitive load. Kent Beck examines when context switching genuinely damages throughput versus when working on multiple parallel streams (waiting for feedback, unblocked work) actually accelerates delivery without sacrificing quality.

Read full essay on Substack ↗

Design features and options to scale without explosion of complexity

Feature flags and option parameters can accelerate shipping but create exponential complexity if not designed carefully. Beck revisits how to structure features and options so they compose cleanly, separate concerns, and let teams add capability without drowning in conditional logic.

Read full essay on Substack ↗

How AI coding assistants change the practice of test-driven development

AI pair programmers shift TDD from a discipline you impose on yourself to a conversation you have with your tools. When an AI can generate tests and implementations in tandem, the bottleneck moves from writing code to deciding what code should do — and whether it actually does it.

Read full essay on Substack ↗

How cognitive decline affects your identity as a software engineer

Kent Beck shares his experience with unexplained neurological symptoms that degraded his memory, focus, and ability to handle complexity—and what he learned about separating self-worth from raw brain power. The essay explores how constraints force different kinds of problem-solving, and why sustainable engineering matters when your cognitive capacity isn't guaranteed.

Read full essay on Substack ↗

Detect duplicate code patterns AI assistants generate during pair programming sessions

AI coding assistants excel at rapid implementation but often miss design opportunities that duplicate logic across your codebase. A copy/paste detector acts as a secondary check during AI-augmented development, surfacing violations of DRY principles that would normally require manual code review—letting you preserve velocity while maintaining design coherence.
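
The essay describes using an off-the-shelf detector; as a rough illustration of the underlying idea, a detector can hash fixed-size windows of normalized lines and flag any window that appears in two places. A minimal sketch (the `WINDOW` threshold and `find_duplicates` helper are my own illustration, not the tool the essay discusses):

```python
import hashlib
from collections import defaultdict

WINDOW = 3  # minimum run of matching lines to flag (assumed threshold)

def normalize(line: str) -> str:
    # Ignore indentation and trailing whitespace so reformatted copies still match.
    return line.strip()

def find_duplicates(sources: dict[str, str]) -> list[tuple[str, int, str, int]]:
    """Return (file_a, line_a, file_b, line_b) pairs where a WINDOW-line
    run of normalized code appears in two different places."""
    seen: dict[str, tuple[str, int]] = {}
    dupes = []
    for name, text in sources.items():
        lines = [normalize(l) for l in text.splitlines()]
        for i in range(len(lines) - WINDOW + 1):
            chunk = "\n".join(lines[i:i + WINDOW])
            if not chunk.strip():
                continue  # skip all-blank windows
            digest = hashlib.sha1(chunk.encode()).hexdigest()
            if digest in seen and seen[digest] != (name, i):
                dupes.append((*seen[digest], name, i))
            else:
                seen.setdefault(digest, (name, i))
    return dupes
```

Run over AI-generated files after each session, each reported pair is a candidate for extraction into a shared function, which is the design opportunity the assistant missed.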

Read full essay on Substack ↗

How to guide AI coding assistants toward functional programming patterns

AI pair programmers default to imperative code, but you can steer them toward functional style through careful prompting and constraint. Kent Beck demonstrates how to get Claude or similar LLMs to generate idiomatic Rust using composition, immutability, and pure functions—and why it matters for maintainability.

Read full essay on Substack ↗

Generalizing test cases to discover design patterns in TDD

The generalize step in test-driven development reveals design patterns by moving from specific test cases to abstract solutions. Rather than writing code that merely passes concrete tests, intentional generalization surfaces reusable abstractions that improve maintainability and reduce future rework—a core practice in Canon TDD.
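
The move from "fake it" to generalize can be shown in miniature. A sketch under my own example (the `frame_score` function is illustrative, not from the essay):

```python
def test_frame_score():
    assert frame_score(3, 4) == 7  # the concrete case that drove the code

# Step 1 ("fake it"): the quickest way to pass the test is a constant.
# def frame_score(first, second):
#     return 7

# Step 2 (generalize): replace the constant with the computation the
# concrete test case was standing in for. The abstraction that emerges
# (a sum of rolls) is the reusable piece the fake was hiding.
def frame_score(first: int, second: int) -> int:
    return first + second

test_frame_score()
```

The test stays fixed across both steps; only the implementation moves from specific to general, which is what surfaces the pattern.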

Read full essay on Substack ↗

How to keep AI coding agents from adding unnecessary complexity when translating algorithms

When AI pair programming starts adding complexity faster than you can control it, stepping back to a simpler implementation language first—then translating mechanically—can break the cycle. Kent Beck discovered this building a B+ Tree in Rust: the language's ownership constraints plus algorithmic complexity compounded until the AI couldn't proceed. Writing the same data structure in Python first, then having an autonomous agent translate it test-by-test into idiomatic Rust, produced cleaner code in hours.

Read full essay on Substack ↗

Copy language design patterns from simpler languages to escape accidental complexity

When a language accumulates features, studying simpler languages reveals which patterns solve core problems elegantly—and which are accidental complexity you can eliminate or redesign. Beck explores how borrowing from minimal languages can help teams escape the tar pit of their current language's bloat.

Read full essay on Substack ↗

How AI pair programming accelerates learning without sacrificing production speed

Legitimate peripheral participation—learning by doing real work alongside an expert—explains why AI-augmented coding accelerates skill acquisition in unfamiliar languages. You gain confidence tackling messy production code, not toy problems, while the AI handles volume, letting you focus on design and language patterns. This inverts the usual tradeoff: you learn faster precisely because you're shipping, not studying in isolation.

Read full essay on Substack ↗

Why LLMs generate unnecessary design patterns and how to fix it

LLMs trained on static code snapshots learn to replicate patterns—factories, registries, interfaces—even when they add no value to small systems. Training models on diffs and incremental changes instead would teach them when complexity becomes essential, and equip them to sequence safe refactorings and behavior changes like expert programmers do.

Read full essay on Substack ↗

Which AI coding assistant should you use in 2025 and why

Kent Beck evaluates five AI coding tools (Augment Code, Cursor, GitHub Copilot, Claude Code, Roo Code) and finds meaningful day-to-day performance differences between them. Rather than committing to one vendor, he recommends exploring multiple tools in parallel—the landscape is changing too fast to lock in, and each tool has distinct tradeoffs in context awareness, refactoring support, and unwanted code generation behavior.

Read full essay on Substack ↗

How do artists and engineers approach creative problem-solving differently

Kent Beck explores the intersection of artistic and engineering thinking at the Thinkie World Congress, examining how creative disciplines inform technical decision-making. The session connects abstract problem-solving methods from art to the concrete constraints engineers face in software design.

Read full essay on Substack ↗

How to guide AI coding assistants toward better code suggestions consistently

AI pair programming requires active prompt engineering to prevent hallucinations and low-quality suggestions. Rather than fighting bad outputs reactively, senior engineers can structure persistent prompts—context, constraints, and quality gates—that nudge AI toward generating code worth keeping, making the collaboration productive instead of exhausting.

Read full essay on Substack ↗

How AI coding assistants hit complexity cliffs when refactoring — and what humans do better

AI-augmented coding excels at incremental changes but fails when large architectural shifts require coexisting implementations. Kent Beck demonstrates how a parallel refactoring strategy—maintaining both old and new code paths simultaneously while tests pass at every step—keeps systems stable during major design changes that would otherwise push an AI assistant off a complexity cliff or force it into deleting tests and faking implementations.
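
During the migration step of such a parallel refactoring, both paths can run side by side with the old result staying authoritative. A minimal sketch, assuming a hypothetical `total` function being reimplemented (the names and the divergence check are my own illustration, not the essay's code):

```python
def total_old(items):
    # Legacy path: the implementation being replaced.
    total = 0
    for price, qty in items:
        total += price * qty
    return total

def total_new(items):
    # New path, built alongside the old one rather than by rewriting in place.
    return sum(price * qty for price, qty in items)

def total(items):
    # Migration step: both implementations coexist. The old result is still
    # authoritative, and a check flags any divergence while the test suite
    # keeps passing at every step.
    old, new = total_old(items), total_new(items)
    assert old == new, f"parallel implementations diverged: {old} != {new}"
    return old
```

Once the new path has proven itself in production, callers flip to `total_new` and the old path is deleted, one small reversible step at a time.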

Read full essay on Substack ↗

How AI pair programming changes exploration versus production code phases

AI-augmented coding excels during exploration—rapidly testing ideas and discovering design directions—but falls short in expansion and extraction phases where consistency and intentionality matter most. Understanding where generative tools add genuine value prevents over-relying on them for work requiring human judgment and architectural coherence.

Read full essay on Substack ↗