I've been DJing for over two decades. Not the kind where you show up with a USB stick and a pre-planned setlist and press play — the kind where you learn to beatmatch by ear on vinyl, where you develop an intuitive sense for phrasing and energy because the equipment won't do it for you, where your mistakes are audible and immediate and happening in front of a room full of people who paid to be there.
I've also spent my career in enterprise IT and systems architecture. I've designed infrastructure at scale, debugged production failures at 3am, built automation platforms, and architected AI workflows. I'm not a traditional software engineer — I don't write Python or JavaScript from memory. But I understand systems deeply enough to design them, specify them, and diagnose them when they break. And increasingly, I use AI tools to bridge the gap between what I can architect and what I can implement in code. Which means I know firsthand, from both sides of the dependency, exactly how powerful and how dangerous that crutch can be.
I've been laid off twice in two years. I've sat through five rounds of technical interviews only to get a rejection email. I've watched the industry I've given my professional life to undergo a transformation that most people are celebrating and I find deeply unsettling.
Because I've seen this exact transformation before. I watched it happen to DJing.
Software DJ tools like Traktor had offered sync functionality for years, but it lived on laptops and controllers — tools the purists could dismiss as toys. That changed in 2012, when Pioneer released the CDJ-2000 Nexus with Beat Sync built in. The industry-standard hardware — the same decks installed in every serious club on the planet — now had a button that would automatically align the tempo of two tracks. Beatmatching — the foundational skill of DJing, the thing you spent months in your bedroom learning before you ever played in public — became optional on the very equipment that defined professional performance.
The reaction from the existing DJ community was predictable. Purists screamed about the death of the art form. Newer DJs said the old guard was gatekeeping. The industry settled on a comfortable narrative: sync is just a tool, it frees you up to focus on track selection and creativity, the audience doesn't care how you mix as long as it sounds good.
And that narrative was partially true. Sync is a tool. Track selection does matter more than technical mixing in many contexts. Most audiences genuinely cannot tell the difference.
But here's what actually happened over the next decade-plus: an entire generation of DJs emerged who could perform under ideal conditions but had no idea what to do when conditions stopped being ideal. CDJs freeze. Software crashes. You're playing a track that has a tempo drift the algorithm can't handle. The monitors cut out and you can't hear what's coming through the headphones clearly. These aren't hypotheticals — they're Tuesday night at any club.
I still bring backup media to every gig. Vinyl, CDs, a USB stick — because I came up on unreliable equipment where any piece of the chain could fail at any moment, and you had to keep the music going regardless. To this day, when I play out, I hear younger DJs react with genuine amazement that I can beatmatch by ear without looking at the screen. That used to be the bare minimum. Now it's a spectacle. That shift tells you everything you need to know about what happened to the skill floor in this industry.
The DJs who learned to beatmatch by ear adapted. They had a foundational understanding of what was actually happening — tempo relationships, phase alignment, harmonic structure — so when the tool failed, they still had the skill underneath. The DJs who learned on sync stood there staring at a loading screen, because they'd never developed a mental model of what the tool was doing for them. They could operate the interface. They didn't understand the system.
I think about this every time I watch a software engineer interact with GitHub Copilot.
The parallel is almost too clean. AI-assisted coding tools have done to software engineering what sync did to DJing. They've lowered the barrier to entry — which is genuinely good — while simultaneously creating a population of practitioners who can produce output without understanding the systems that output depends on.
An engineer who learned to code by writing code, debugging code, reading other people's code, and developing a mental model of how computers actually execute instructions can use Copilot as an accelerator. They know what the tool is suggesting and why. They can evaluate the output because they understand the domain. When the tool generates something subtly wrong — and it does, constantly — they catch it, because they have an internal reference frame that exists independently of the tool.
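To make that concrete, here's a sketch of the kind of subtly wrong code I mean, in Python. The function and its names are hypothetical, not pulled from any real Copilot session, but the bug is a classic: a mutable default argument. It's exactly the sort of completion an AI assistant will produce with total confidence, because it looks idiomatic and passes a single-call test.

```python
# AI-suggested version: looks plausible, works once, fails quietly.
def add_tag(item, tags=[]):
    # The default list is created ONCE, at function definition time,
    # and shared across every call that omits the `tags` argument.
    tags.append(item)
    return tags

first = add_tag("urgent")
second = add_tag("billing")
# second is ["urgent", "billing"], not ["billing"] --
# state from the first call leaked into the second.

# The fix an engineer with a mental model of Python reaches for on sight:
def add_tag_fixed(item, tags=None):
    if tags is None:
        tags = []  # a fresh list per call
    tags.append(item)
    return tags
```

The broken version will sail through a unit test that calls it once. The failure only shows up under repeated calls in a long-running process, which is to say, in production. Catching it requires knowing how Python evaluates default arguments, not just recognizing that the code compiles.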
An engineer who learned to code primarily through AI assistance is in a fundamentally different position. They can produce working code at impressive speed. They can ship features. They look competent, the same way a sync DJ looks competent when the CDJs are behaving. But when something breaks in production and the error isn't something the AI has a clean answer for, when the debugging requires actually understanding memory allocation or race conditions or how a particular database engine optimizes queries — they're standing behind the decks with their hands at their sides.
This isn't theoretical. I spent my last job building observability infrastructure and debugging platform failures. I watched engineers struggle to troubleshoot systems they had built because they'd assembled them from suggested code blocks without developing a coherent understanding of how the pieces interacted. They could build. They couldn't diagnose. Those are two completely different skills, and the current toolchain develops one while letting the other atrophy.
And I say this with full honesty: I live in the gap between those two skills myself. I can architect a system, design the data flow, specify every component and how they interact. When it comes to writing the actual code, I lean on AI tools heavily. I am, in a very real sense, dependent on the crutch. The difference — and I think this is the critical difference — is that I know what the code is supposed to do before the tool generates it. I can evaluate the output against the architecture in my head. When the tool gives me something that looks right but is subtly wrong, I usually catch it, because I understand the system even if I didn't write the implementation by hand. But I'm not going to pretend I'm immune to the problem I'm describing. I've felt the floor disappear when the AI produces something I can't fully evaluate. That's what makes me take this seriously.
The culture around both disciplines has undergone the same mutation. In DJing, the measure of success shifted from craft to visibility. A DJ's value is increasingly determined by their Instagram following, their Boiler Room appearance, their brand collaborations — not by whether they can hold down a four-hour set and read a room. The resident DJ who plays every Friday, who knows the sound system intimately, who builds a night from zero energy to peak over the course of hours — that person is invisible compared to the touring DJ who plays a 90-minute set of pre-selected bangers to a phone-recording crowd.
In engineering, the same inversion happened. GitHub green squares. "I built this in a weekend" Twitter threads. Conference talks about tools you've used for three months. The performance of engineering competence has become more valued than engineering competence itself. The person who maintains critical infrastructure, who has deep institutional knowledge, who can debug a cascading failure across distributed systems — they're invisible compared to the engineer who shipped a flashy feature and wrote a blog post about it.
Both industries now have a discoverability problem that is really a quality assessment problem. The signals that the market uses to evaluate talent — follower counts, contribution graphs, interview performance, visible output — are weakly correlated with the actual ability to do the work under non-ideal conditions. And because the tools make it possible to generate impressive-looking output without deep understanding, the signal-to-noise ratio has cratered.
Which brings me to hiring, because this is where the damage is most tangible.
I recently went through five rounds of interviews for a role I was deeply qualified for. Technical challenge, multiple conversations, team chat, the full gauntlet. Rejected. I don't say this with bitterness — interviewing is probabilistic and companies have their own internal logic. But the experience crystallized something I've been thinking about for a long time.
Modern technical interviews are optimized to assess tool operation, not system comprehension. They test whether you can produce a correct output in a constrained environment, which is exactly the skill that AI-assisted coding has made trivially easy to fake. They don't — and largely can't — test whether you understand why the system works, what happens when it breaks in a way nobody anticipated, whether you can trace a problem across layers of abstraction from application logic down to infrastructure.
The DJ equivalent would be booking acts based entirely on a recorded mix — which might have been pre-planned, tempo-corrected, and edited in post — rather than watching them play live, on unfamiliar equipment, to an unfamiliar crowd. One tests preparation. The other tests understanding. The industry increasingly selects for preparation.
The result is predictable. Companies hire engineers who interview well. Some of them are genuinely excellent. Some of them are sync-button engineers who can operate under ideal conditions and will be lost the first time something truly unexpected happens in production. The interview process can't tell the difference, and the AI tools have made the surface-level performance nearly indistinguishable from the real thing.
I don't have a clean solution. I'm not arguing that AI tools are bad or that sync is bad or that accessibility is bad. Democratizing complex skills is a net positive. More people making music is good. More people writing software is good.
But we're lying to ourselves if we pretend that tool-assisted output is the same as tool-informed understanding. A DJ who uses sync as one capability among many — who understands tempo, phrasing, harmonic theory, room dynamics, and uses the tool to free up cognitive bandwidth for higher-order decisions — is a completely different practitioner than a DJ who uses sync because they never learned what it replaces. An engineer who uses Copilot to accelerate work they already understand is a completely different practitioner than an engineer who uses Copilot because they can't produce the work without it.
The distinction matters. It matters when CDJs crash. It matters when production systems fail. It matters when the comfortable conditions that make tool-dependent practitioners look competent suddenly disappear.
I know this because I've stood behind the decks when the display shut off, the software crashed, or a storage disk failed, and I kept the mix going by ear. I know this because I've been the person at 3am tracing a database issue through layers of infrastructure while the tool-dependent engineers waited for someone else to figure it out. And I know this because I use AI tools to code every day, and I've felt the difference between understanding what I'm building and hoping the tool understood it for me.
And I know this because I've been rejected by an interview process that couldn't tell the difference.