When a login button breaks, you know it. The bug is reproducible, the fix is straightforward, and the path from problem to resolution is clear.
Who would have imagined, five years ago, that staying relevant in QA in 2026 might involve… poetry? Not writing it for fun, but weaponizing it.
In November 2023, I flew to Germany to speak at Agile Testing Days, one of the leading software testing conferences. My keynote was called “10x Software Testing.” I knew there would be skepticism in the room.
Most product teams today are very good at one thing: testing what happens when a user types a prompt.
For a long time, we spoke about “AI agents” like they were a future concept, something that might eventually book flights, run workflows, or make payments on our behalf.
AI testing careers are shifting in ways that most people in QA are not fully prepared for, and the changes are creating opportunities that did not exist even a few years ago.
AI doesn’t just learn from data; it learns from us, and we are far from perfect. When it scrapes the internet for knowledge, it also absorbs our biases, blind spots, and noise, which shape how it interprets the world.
For years, QA practices were designed for predictable, rules-based software. AI has upended that reality by introducing risks that traditional methods cannot fully address.