AI, Three Years In: Speed Is Easy. Trust Is Not.

In November 2023, I flew to Germany to speak at Agile Testing Days, one of the leading software testing conferences. My keynote was called “10x Software Testing.” I knew there would be skepticism in the room.

Kristel Kruustük, co-founder of Testlio
January 30th, 2026

At one point, I asked a simple question:

“How many of you have experimented with AI tools in your work?”

Roughly half of the audience raised their hands. The other half hadn’t. That didn’t surprise me.

At the time, AI still felt optional. Experimental. Interesting, but not essential. Some AI models still spelled “strawberry” wrong (some still do!), so the hesitation made sense. Yet no one could deny that the possibilities seemed endless.

In my talk, I shared some of the experiments we had been running and said something that has since become almost common wisdom:

You either use AI, or you will be replaced by someone who does.

Fast forward to today, and almost everyone is absorbed by AI. The shift has been fast and dramatic. And while I’ve been in this industry long enough to see hype cycles come and go, this one clearly isn’t hype.

As AI becomes infrastructure, the cost of getting it wrong grows. Staying silent feels easier, but it also feels like the wrong choice.

Three years is not a long time

If you think about it, it’s been just over three years since ChatGPT made its way into our lives.

That’s not nearly long enough for most of us to even fully come to terms with the consequences of the decisions we’ve made since. To put it in perspective, for most companies, this is when things finally start working, and scale finally kicks in.

Like many others, I started with curiosity, excitement, and a healthy dose of skepticism. The productivity gains looked real. The possibilities, as I shared earlier, felt endless. AI moved quickly from “interesting” to “everywhere.”

But over the past year, I’ve found myself pausing more often. Not because AI isn’t impressive, but because of the patterns that are starting to repeat. 

Think about the early days in car manufacturing. While manufacturers prioritized speed and market share, it was the real people behind the wheel who dealt with the consequences of safety being an afterthought. In some ways, AI adoption feels a lot like that. Governance and guardrails are still lagging behind, while innovation hasn’t stopped. 

Where the discomfort is coming from

For me, one of the things that stands out most is how invasive AI is becoming when it comes to privacy.

We talk a lot about empowerment and efficiency, but not nearly enough about how AI has quietly started to influence our decisions. Recent product announcements, like OpenAI for Healthcare or those we saw at CES, point to a future in which AI isn’t a background actor anymore. It is increasingly embedded in moments that involve sensitive data, personal judgment, and real-world consequences.

Yet no one is asking the hard question: where do we draw the line?

Nowhere is this tension more visible than in healthcare, where the current wave of adoption is already being described as the AI healthcare gold rush. Even the most skilled healthcare professionals, despite years of education, rigorous certification, and clinical experience, rely on second opinions and layered checks. If they are required to know their limits, AI systems that influence similar decisions must be built with the same expectations.

Another concerning pattern is the relentless focus on speed. Faster models. Faster releases. Faster adoption. Speed is celebrated. Restraint is not. 

That raises yet another uncomfortable question: are we actually moving fast in the areas that matter most?

Organisations like the Future of Life Institute have been vocal about the risks of developing AI without clear guardrails. This is not about abstract ethics debates. It’s about very real consequences when systems scale without enough thought given to failure modes, misuse, or unintended impact.

A comment that stuck with me recently came from our CRO, Dean. He predicts that a major AI-related incident will happen in 2026, forcing companies to rethink their safety assumptions almost overnight.

That may sound dramatic. But trust is fragile.

We are sharing enormous amounts of personal and company information with generative tools every day. It only takes one highly visible incident for that trust to disappear.

A quiet signal from the insurance world

And we are not alone in our assumption that AI without guardrails is a recipe for disaster. Many insurance companies are now explicitly excluding AI-related incidents from their policies. For them, hype isn’t enough to look past the growing risk concerns. 

To get coverage, companies need to qualify through audits, controls, and evidence of responsible use. It signals that the consequences of AI are still poorly understood, and that alone should make leaders pause.

What actually scales

Right now, you’re probably thinking, “Okay, Kristel, you’ve scared us enough, but we can’t really stop adopting AI.” I’m not advocating that you stop innovating. Instead, I’m asking you to consider what you would need to balance both innovation and safety without losing scale. 

So, I’ll share something that resonated with me recently that might help. At Davos, one of the co-founders of Shazam said that the focus shouldn’t be on abstract ethics, but on how trust gets operationalised at scale.

Certification schemes. Audits. Benchmarks. Standards. The unglamorous and often overlooked infrastructure.

From his time at Shazam and through backing AI-driven companies, he has seen the same pattern repeat. The technologies that endure are the ones that build safety into their growth, not the ones that treat it as an afterthought.

Yes, speed looks impressive, and scale is exciting, but automation without guardrails is dangerous.

The next chapter of AI won’t be won by the models that do the most. It will be won by the ones that know when not to act, when to ask for confirmation, and when to stay out of the decision entirely.

Where I’m leaning in

At Testlio, we recently launched LeoInsights™ as part of our LeoAI Engine™. It’s a small step, and intentionally so.

For me, this work isn’t about chasing AI for its own sake. It’s about understanding where it adds real value, where it introduces real risk, and how we build systems people can actually trust.

Testing, auditing, and accountability may not sound exciting, but they are the foundations that allow technology to scale responsibly.

An open question

I don’t have all the answers. I’m still learning, observing, and challenging my own assumptions as I go deeper into the AI space.

But I do believe this:
The conversation needs to shift, and it needs to shift fast.
Not away from innovation, but toward responsibility.
Not away from speed, but toward trust.

So, let’s go back to a question I raised earlier. Where do you think the line should be drawn?

People worth following on quality and AI

As I close this out, I’m going to leave you with a list of people I’ve been following closely to learn more about how quality, responsibility, and AI intersect:

  • Luiza Jarovsky – thoughtful perspectives on quality, risk, and systems thinking
  • Karen Hao – sharp analysis on AI power dynamics and accountability
  • Kevin Henrikson – entrepreneur behind Pretty Good AI, with a grounded take on building companies and transforming industries
