
Those who like to build


This is part of an ongoing series on what it means to be a software engineer in the age of AI.

I just got back from India. A week and a half with the new team that would become Lytics' counterpart there. Finally met the QA engineers I'd been working with remotely — the ones who'd been using the testing framework I built over the summer, adding to it, making it theirs.

On the flight home, I found myself thinking about how we got here.

In April, something shifted at work.

People started sharing their AI workflows — how they were using Claude, Copilot, Cursor. Not theoretical discussions about whether AI would change things. Practical: here's what I built this week, here's how.

The energy was different. Not hype. Curiosity. What if we actually leaned into this?

In one of my bi-weekly one-on-ones with Mark, my manager, he said something that stuck with me: "Those who like to build will thrive. They'll feel so enabled, they won't be able to stop themselves from pushing further."

I'd heard variations before. AI will change everything. The future belongs to the adaptable. But this was different. Not about survival — about possibility. Not "learn AI or get left behind" — if you love building, you're about to find out how much more you can do.

Something clicked. I took a leap and never looked back.

What "liking to build" means

Building isn't the same as coding.

I've known engineers who are brilliant at coding but don't build. They optimize. They refactor. They review. Important work. But they wait for someone else to define what needs to exist.

Building is the itch to make something exist that didn't before. To see friction and think: I could fix that. Not "someone should fix that." I could.

For years, that itch was bottlenecked by implementation details. You had an idea, but turning it into reality meant learning frameworks, debugging config, writing boilerplate. The gap between "I want this to exist" and "it exists" was weeks. Sometimes months.

AI collapsed that gap.

The first experiment

On April 9th, I started building an internal E2E testing framework for our product. (The shared patterns later became playwright-core, which we open-sourced.)

I'd been thinking about test automation for months. We had scattered tests, inconsistent patterns, no visibility into what was covered. The problem was clear. The solution was clear. But building it meant weeks of setup, learning Playwright patterns, writing boilerplate.

With Claude, it took two weeks.

Not two weeks of fighting config. Two weeks of building. The AI handled the boilerplate. I focused on the architecture — the patterns that would make tests maintainable, the conventions that would make the framework usable by the whole team.
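To give a feel for those patterns, here's a minimal sketch of a fixture layer in that spirit — the names (LoginPage, the pages/ path) are illustrative, not the framework's actual API:

import { test as base } from '@playwright/test';
import { LoginPage } from './pages/login-page'; // hypothetical page object

// Extend Playwright's base test so specs receive page objects as
// fixtures instead of constructing them inline.
export const test = base.extend<{ loginPage: LoginPage }>({
  loginPage: async ({ page }, use) => {
    await use(new LoginPage(page));
  },
});

export { expect } from '@playwright/test';

Specs import test from this file and never touch @playwright/test directly — that's what makes the conventions enforceable.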

Learning to work with AI

The first version was rough. I'd paste code into Claude, ask for fixes, paste the output back. It worked, but it was clunky.

I started noticing patterns. Claude made better suggestions when it understood the context. So I added a CLAUDE.md file — a guide for Claude explaining the project structure, the commands, the architecture.
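Something in this shape (illustrative, not the real file):

# CLAUDE.md
## Project
E2E test suite built on Playwright + TypeScript.
## Commands
- npm test — run the full suite
- npx playwright test path/to/spec — run one spec
## Architecture
- pages/ — page object models (all selectors live here)
- tests/ — specs (POM-only, no raw selectors)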

Then I noticed Cursor had its own context system. I added .cursorrules — more specific instructions for how to write code in this repo.
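Again, a sketch of the shape rather than the actual contents:

# .cursorrules
- Write tests in TypeScript against our page objects; never put raw selectors in specs.
- New page objects go in pages/.
- Every test carries testSuiteName, journeyId, and testCaseId annotations.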

Then I got more ambitious. What if AI could generate entire test files, following our conventions automatically?

I led a session with the team to document our patterns, and we drafted a PROMPT.md — a guide for how we write tests. I took that foundation and expanded it into docs/ai/PROMPT.md, an authoritative guide for AI-assisted test generation. Not just "here are the commands" but "here's how to think like a contributor to this repo."

# Core Principles
- POM-only: No raw selectors in specs
- Annotations: Each test must include testSuiteName, journeyId, testCaseId
- Steps: Use sparingly to group meaningful phases

# AI Self-Check Before Returning Output
- [ ] Correct file path
- [ ] POM-only (no raw selectors)
- [ ] Annotations present
- [ ] Tags included

The workflow became: Playwright codegen → paste into Cursor → AI refactors into our patterns → drop into scaffold.
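Concretely, one pass looks something like this. The annotation and tag syntax is stock Playwright; the page object, fixture import, and IDs are invented for illustration:

import { test, expect } from '../fixtures'; // hypothetical shared fixtures

// codegen emits raw locators like page.getByLabel('Email').fill(...);
// the AI pass rewrites them against our page objects:
test('user can sign in', {
  tag: '@smoke',
  annotation: [
    { type: 'testSuiteName', description: 'auth' },
    { type: 'journeyId', description: 'J-101' },
    { type: 'testCaseId', description: 'TC-0042' },
  ],
}, async ({ loginPage }) => {
  await loginPage.goto();
  await loginPage.signIn('qa@example.com', 'secret');
  await expect(loginPage.dashboardHeading).toBeVisible();
});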

I wasn't coding less. I was building more.

The dashboard

By June, we had tests. But no visibility. Test results lived in CI logs. You'd have to dig through runs to understand what was passing, what was failing, what was flaky.

So I built a dashboard.

An internal dashboard — a real-time view into test health. Pass rates, trends, coverage gaps. The test framework fed data to Firestore; the dashboard visualized it.
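A sketch of that pipe, built on Playwright's custom reporter hook — the collection and field names are my guesses, not the real schema:

import type { Reporter, TestCase, TestResult } from '@playwright/test/reporter';
import { initializeApp } from 'firebase-admin/app';
import { getFirestore } from 'firebase-admin/firestore';

initializeApp(); // picks up GOOGLE_APPLICATION_CREDENTIALS

// Custom reporter: push one document per finished test to Firestore,
// where the dashboard reads pass rates and flake trends.
class FirestoreReporter implements Reporter {
  private db = getFirestore();

  async onTestEnd(test: TestCase, result: TestResult) {
    await this.db.collection('test-results').add({
      title: test.title,
      status: result.status, // 'passed' | 'failed' | 'timedOut' | 'skipped'
      durationMs: result.duration,
      retries: result.retry,
      ranAt: new Date(),
    });
  }
}

export default FirestoreReporter;

Wiring it up is one line in playwright.config.ts: reporter: './firestore-reporter.ts'.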

Another two weeks. Another thing that existed because the implementation tax had dropped.

The numbers

I didn't track my output at first. I was just building.

But when I looked back at the summer:

Month     2024   2025
April       28     77
May         40    101
June        33    239
July        61    253
August      44    299

Same person. Same job. Same hours. Different output.

August alone was nearly 7x what I did in the same month last year. And I wasn't working harder — I was just building more.

What I was actually learning

The tools were part of it. But the bigger shift was in how I thought about work.

Before: Can I build this? The answer determined whether I'd try.

After: Should I build this? If yes, the implementation would follow.

The bottleneck moved. It wasn't "do I have the skills to build this?" It was "do I have the judgment to know what's worth building?"

That's a different question. A product question. A builder's question.

"Those who like to build"

Sitting on that flight from India, watching the QA team's commits roll in on my phone, Mark's words made more sense.

AI doesn't make everyone a builder. It reveals who the builders always were.

The people who noticed friction. The people who couldn't stop themselves from fixing things. The people who had ideas but were bottlenecked by implementation.

Now they're free.

If you like to build — if you have the itch — AI is the best thing that's happened to you. The gap between "I want this" and "it exists" has never been smaller.

If you don't have that itch... AI won't give it to you. It just makes the gap more visible.


This is part of an ongoing series on what it means to be a software engineer in the age of AI.

Related Project

playwright-core

Shared Playwright testing patterns extracted from internal E2E framework. Used daily by a 4-person QA team across two products.

TypeScript · Playwright · Testing · Open Source