There is a lot of conversation about the impact of AI on thought work. A lot of it is sensationalized. Here are a few common tropes I have seen and heard this past year:

We are downsizing because we are AI-first

AI is going to make everyone developers

AI is taking software jobs

AI code is garbage

If I see you using AI in your communication, you are blocked

I have seen each of these in various forms, both in a tone of idyllic hope (“A bright new day is upon us!”) and of despondent dystopia (“The end is at hand. Take up farming while you still can”). Sentiment aside, these headlines never struck me as effectively anchoring the phenomenology of the rise of AI in pop tech.

Make no mistake: there is a transformation taking place in technology. I have been deeply immersed in it for the past two years and continue to be struck by how impactful it is in nearly every facet of business, technology, and personal life. But I hesitate at the notion that AI is replacing a person or an industry. These headlines strike me as sensationalism designed to prey on fears for viewership.

If these headlines don’t capture what’s happening, what’s a better orientation? The following has helped me anchor the many competing narratives in an increasingly crowded space: AI is a tool. That’s it. A tool.

AI as a Tool

First, let’s be clear: AI is a powerful tool. It is the current culmination of the internet age: a massive probability engine built on the information we have created, shared, and connected over the past two generations. It is the combination of hardware availability, algorithmic advances, infrastructure build-out, and content production that makes the emergence of LLMs so powerful. Its context models and customizations (prompt engineering, tool use such as MCP, RAG) enable sub-specializations: coding, communication, business strategy, and lifestyle.

Its generative properties support movement from assistant to agent. It’s not just a search tool; tasks can be delegated.

It is difficult to convey the paradigm shift that is taking place; the closest analogues I can offer are a few earlier ones:

  • Electronic spreadsheets
  • Internet connected systems
  • Managed memory and high-level languages

Underlying each of these is an explosion of software design patterns, architectural innovation, and transformation. The electronic spreadsheet, introduced in the 1970s, was going to replace the analyst. In fact, the field grew, as the tool supported a broader range of functions than ledger management. The internet opened up an entire field of distributed systems designs, non-relational databases, and eventually consistent models for state management. High-level languages, once considered too slow to effectively run enterprise production systems (yes, I date myself with this statement), became a force multiplier: developers no longer needed to build and manage their own memory tables. Their focus shifted toward business applications (and the rise of platform engineering).

Underlying all of this, AI is a value producer. Like the tools above, it compresses the distance between intent and output: more analysis, more code, more communication per unit of human effort. That is the premise the rest of this piece builds on.

Let’s be clear what AI is not:

  • Magic
  • A human

AI can imitate human behavior (the ultimate copycat), and it can surprise us with the connections it draws from our prompts (I continue to be amazed). Our amazement and sense of magic come from the context pool the probability engine is drawing from: the internet. My wonder has shifted from the “magic of the LLM” to a sense of wonder at just how connected we are in our human experience. The problems we discover through prompting, and believe are uniquely ours, are more common than we originally thought.

Building on AI as a Tool

If AI is “just a tool,” that framing should help us navigate some of the popular headlines we are seeing in the community. Let’s explore each of them.

We are downsizing because we are AI-first

We start by taking as given that AI increases value production in an organization.

Let’s break this one down, because it’s nuanced. Downsizing is a mechanism for managing the bottom line. It’s part of the fundamental equation of business profitability: boost the top line as much as possible while keeping the bottom line as small as necessary. :mind-blown:

If a company is reducing headcount “because of AI,” it means they have determined that their current top-line acceleration curve is not boosted by additional headcount. AI acceleration displaces current personnel spend, and the equation is best balanced by reducing that investment. Velocity is maintained, the bottom line is reduced. The company wins.

Consider an alternative scenario: a company’s top line is growing rapidly and needs engineering acceleration. The company fully adopts AI. Ahead of personnel hiring (the most expensive part of the business), value production accelerates and the bottom line flattens during a growth stage. The company wins.
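A toy illustration of the two scenarios, with purely hypothetical figures, shows how the same profit equation supports both decisions:

```python
# Toy model: profit = top line (revenue) minus bottom line (cost).
# All figures below are hypothetical, purely for illustration.
def profit(top_line: float, bottom_line: float) -> float:
    return top_line - bottom_line

# Scenario 1: flat top line. AI maintains delivery velocity, so the
# company rebalances by reducing personnel spend.
flat_before = profit(top_line=10_000_000, bottom_line=8_000_000)
flat_after = profit(top_line=10_000_000, bottom_line=6_500_000)

# Scenario 2: rapid top-line growth. AI accelerates value production
# ahead of hiring, so the bottom line flattens while revenue climbs.
growth_before = profit(top_line=10_000_000, bottom_line=8_000_000)
growth_after = profit(top_line=14_000_000, bottom_line=8_500_000)

print(flat_after - flat_before)      # profit improves by 1,500,000
print(growth_after - growth_before)  # profit improves by 3,500,000
```

In both cases the company “wins” on the same equation; the difference is only which term AI lets it move.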

Note that whether the company is flat, growing, or declining, a forward-leaning business should adopt AI first. Personnel decisions are entirely a function of the stage of the company (growth, flat, declining).

AI is going to make everyone developers

First off, I hope so! It’s a great skillset to have and one that I believe is quite satisfying. However, underlying this headline is often a fear that developers’ intellectual capital is being lost with the rise of AI.

Peter Norvig (former director of research at Google) writes about expertise in his essay Teach Yourself Programming in Ten Years. In it, he reacts to a (now dated) crop of books advertising that you could teach yourself a language in 21 days. His point is that programming is not learned in a month. It’s learned across years of experience: building systems, observing system behavior, and internalizing the impact of programming techniques, program execution, design patterns, problem decomposition, and pattern recognition.

Similarly, AI can give thousands of lines of code to anyone. What it doesn’t build is fast pattern recognition: when to tightly couple vs. modularize, when to break into n-tier or microservices, when to construct a generator-based inversion-of-control pattern vs. a coupled, imperative flow. It can explain each of those, and it can generate them (with a tendency toward eager design patterns), but it can’t (yet) govern an enterprise system.
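To make that last contrast concrete, here is a minimal sketch in Python (all names and the discount policy are hypothetical): a coupled, imperative flow that hard-codes its pricing policy, next to a generator-based inversion-of-control version where the caller drives each step and supplies the policy.

```python
# Coupled, imperative flow: the function decides everything itself,
# including the discount policy baked into the loop.
def process_orders_imperative(orders):
    results = []
    for order in orders:
        total = sum(item["price"] * item["qty"] for item in order["items"])
        if total > 100:
            total *= 0.9  # hard-coded discount policy
        results.append({"id": order["id"], "total": round(total, 2)})
    return results

# Generator-based inversion of control: the function yields each order's
# subtotal and waits for the caller to send back the final price.
def process_orders_ioc(orders):
    results = []
    for order in orders:
        subtotal = sum(item["price"] * item["qty"] for item in order["items"])
        total = yield subtotal  # control returns to the caller here
        results.append({"id": order["id"], "total": round(total, 2)})
    return results

def run_with_policy(orders, policy):
    """Drive the generator, applying `policy` to each yielded subtotal."""
    gen = process_orders_ioc(orders)
    try:
        subtotal = next(gen)
        while True:
            subtotal = gen.send(policy(subtotal))
    except StopIteration as done:
        return done.value  # the generator's return value

orders = [
    {"id": 1, "items": [{"price": 60.0, "qty": 2}]},
    {"id": 2, "items": [{"price": 20.0, "qty": 1}]},
]

imperative = process_orders_imperative(orders)
ioc = run_with_policy(orders, lambda s: s * 0.9 if s > 100 else s)
# Same results, but in the second version the pricing policy is
# injected by the caller rather than welded into the loop.
```

Both produce identical totals here; the difference is who owns the decision. Knowing when that extra indirection pays for itself, and when it is over-engineering, is exactly the judgment AI doesn’t hand you.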

Neither can every human. I can ask AI to build me a marketing strategy, and for about 30 seconds I might look smart parroting it. But I won’t have a nuanced approach to its application and rollout, or the ability to recognize whether it’s hallucinating or providing a sound approach. That is built by hard work and human specialization.

Similarly, AI won’t make everyone programmers. AI WILL flatten the barrier to entry for those wanting to build up a programming skillset. And I herald this as an important step forward.

AI is taking software jobs

Highly doubtful. AI is a tool and, as a tool, it needs people who employ it well. An excavator can move tons of material in almost no time. But not everyone knows how to effectively use it. I could jump into one and start moving knobs and buttons and I would probably cause a lot of damage.

AI is similar. It moves thousands of lines of code in an afternoon. But whether that moves the business or enterprise code base forward or backward is in the hands of the engineer.

What is clear is that AI is shifting the relationship of the software engineer to the enterprise system. The AI-enabled engineer spends more time architecting than coding. The process moves toward incremental architectural surgery: examination of the system, small additions & refactors that move the system closer to its intended goal.

A better way to express this is: “Engineers who effectively use AI in their work are replacing engineers who do not.”

AI code is garbage

I chuckle at this because it’s somewhat true. But not ipso facto. AI agent-based coding has improved significantly in the past two years, and it will continue to improve because the financial incentives are present. As of this writing, agent-based coding tends to weight global context (all the code on the internet) over the local codebase. That means AI has a tendency to introduce new design patterns to accomplish a goal. Given enough free rein, it creates a mess in which multiple design patterns coexist, code is repeated across the codebase, and maintainability drops. Many times I have had to roll back an agent because it went rogue and ignored clear patterns established in the codebase.

On the other hand, in moments when I realized I needed to refactor a codebase, build tests, or add new modules, it completed the task effectively in minutes. It would have taken me an afternoon, sometimes longer. I have significantly more of these stories than the former kind.

How do we interpret this? The same as the previous point: how we use this tool governs how effective we are in our work. I can easily show how poorly AI performs: an ambiguous prompt (goal), wide latitude, and little to no review. Give me 10 minutes and I can get a post that shows off this dumpster fire.

But, on the other hand, I can show how effectively AI performs: a clear goal, scoped work, local context provided, system prompts governing design patterns, and review after each micro-prompt. Notice: I’m still developing; I’m steering the system closely. But now I’m moving 10x faster (really more: in my experience it ranges between 2x and 30x based on the task).

If I see you using AI in your communication, you are blocked

This one is especially strange. Corollaries include, “If I see you are using AI-generated code, you have lost credibility as a developer,” and similar. Consider other tools where this would sound ridiculous:

  • If I see you used a spell checker in your communication, you are blocked. Grammarly? Even worse.
  • If I notice you using Java’s garbage collection rather than building your own, you aren’t a developer
  • If you are using a heavy-weight IDE, you aren’t really a developer

Why is it strange? At face value, it signals that high-leverage support invalidates you or your work. It’s akin to innovation bullying.

Also, there are many inbound messages I get where I wish AI had cleaned them up!

What if instead we treated signs of AI (e.g., the now infamous em dash) as recognition that the content producer was using support tooling to craft good code, communication, and systems? If we did that, we would quickly come back to the AI-as-tool evaluation: how effectively is the person using AI? How well does the person using AI in this particular discipline understand the content? That’s what ought to block a person, so to speak.

What’s Next?

I believe thinking of AI as a tool is the most grounding model for understanding what’s taking place. It protects against sensationalism and shifts thinking toward more important problem solving: governance and culture.

There is a significant paradigm shift taking place in how software is produced. The field itself is growing rapidly. Engineers and leaders are learning how to use it effectively in their work and their workplaces.

As organizations manage this transformation, the most important areas determining success or failure are governance (how to use it, how to protect from harm) and culture (how to embrace change without breaking apart our companies’ people systems). Thought leaders and change agents in this space will be critical in the next decade.
