The limits of software testing spectacularly revealed by Apple

When the news hit this week that Apple’s new FaceTime group chat feature had a flaw that allowed someone to briefly eavesdrop on someone else, the tech (and mainstream) press was quick to jump on the story, with some mentioning it in the same breath as the nothing-even-remotely-like-it story about Facebook’s egregious privacy-destroying, data-vacuuming “research” app that flagrantly violated Apple’s enterprise program rules.

To me, the most disturbing piece of this story is what it reveals about the limits of software testing, and what it means for our increasingly tech-driven world.

The FaceTime bug itself honestly made me shrug. While disturbing, it’s not as if the bug allowed someone to hoover up the ID and password to your online banking account, nor did it expose all your sexting episodes to your friends and family. Apple also jumped on it quickly by disabling FaceTime group chat until the bug could be fixed.

No, the truly disturbing thing here is what it shines a spotlight on, and that’s basically this:

  • Humans write software
  • Humans make mistakes, thus software has mistakes (bugs)
  • Humans cannot foresee every conceivable way that their mistakes can be revealed*

The sequence of operations needed to expose this bug is nothing a software developer, software tester, QA person, automated software testing script, or any other development tools or processes could ever be expected to find. It lies far enough outside any routine or “normal” set of steps or operations as to be essentially impossible to foresee. (I suppose that point is arguable, but it’s how I happen to see it.) And yet, the bug itself was colorfully laid bare in a way that can only be described as hugely embarrassing to Apple, a company known for its focus on privacy.
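To make the point concrete, here’s a toy model of that kind of failure. Everything below is invented for illustration (the class, its methods, and its logic bear no relation to Apple’s actual code); it just shows how a bug can hide behind an operation nobody thinks to try, while every “obvious” test passes.

```python
# Hypothetical toy model: a bug revealed only by a strange sequence
# of operations. All names and logic here are invented.
class GroupCall:
    def __init__(self, caller):
        self.caller = caller
        self.participants = {caller}
        self.mic_live = False  # callee's mic must stay off until they answer

    def add(self, person):
        self.participants.add(person)
        # Buggy shortcut: re-adding yourself to your own ringing call
        # flips the remote mic on -- an interaction nobody thought to test.
        if person == self.caller and len(self.participants) > 1:
            self.mic_live = True

# The "obvious" test cases all pass...
call = GroupCall("me")
call.add("friend")
assert not call.mic_live  # a normal two-person call behaves correctly

# ...but one odd, pointless-looking step exposes the flaw.
call.add("me")            # add yourself to a call you already started
print(call.mic_live)      # → True: the eavesdropping-style bug appears
```

A test suite built around sensible user behavior never executes that last line, which is exactly the problem: the input space of operation sequences is vast, and the dangerous ones look like nonsense until someone stumbles into them.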

There are relatively common sorts of software failures we’ve learned about over the years and that software testing regimes are designed to unearth, things like invalid inputs, buffer or pointer overflows, SQL injection vulnerabilities, cross-site scripting, and all manner of other things. The more we make software, the more we learn how people tend to break it, and revise our development and testing practices accordingly.
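For contrast, here’s what one of those well-understood failure classes looks like. This is a generic SQL-injection sketch (table, data, and function names are made up), the kind of bug that standard testing and code review are specifically designed to catch:

```python
import sqlite3

# Toy in-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice-secret')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob-secret')")

def lookup_unsafe(name):
    # Vulnerable: user input is concatenated directly into the SQL string.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '" + name + "'"
    ).fetchall()

def lookup_safe(name):
    # Fixed: a parameterized query treats the input as data, not as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"             # the classic injection input
print(len(lookup_unsafe(payload)))  # → 2: leaks every row in the table
print(len(lookup_safe(payload)))    # → 0: matches no user, as intended
```

Because we’ve seen this pattern thousands of times, testers know to throw inputs like `payload` at every text field. The FaceTime bug belongs to no such catalogued pattern, which is what makes it so much harder to anticipate.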

But trying to foresee the exact steps required to reveal the FaceTime group chat bug, as I said, is nearly impossible, and things frequently become clear only in hindsight.

The problem is that these very sorts of bugs exist in all software, everywhere — bugs that require strange, hard-to-foresee sequences of inputs or steps, or that depend on a set of unexpected or unusual data, in order to come to light. They exist in software on which our national security depends. They exist in software that allows cars to drive themselves. They exist in software that runs utilities (power, water, etc.) on which we depend every day. They are literally everywhere.

Thankfully, the majority (or so we hope) of the most mission-critical software applications have limited sets of users, who act in predictable ways, so things like the FaceTime group chat bug (which depends on weird user inputs) are less likely to occur. Or they exist in relative bubbles — closed environments with no direct end-user interaction and limited points of access, like the embedded software systems that control engine function in modern cars.

But bugs don’t depend on user input sequences or network access alone to be revealed; they can be triggered by myriad other unpredictable things, and therein lies the main concern.

It’s also why nearly every day we hear about a new security breach, a new privacy breach, or some other software-related incident.

Should all this keep us up at night? Maybe. But with each new discovery, we humans learn more about the ways we can employ software to screw ourselves over, and as an optimist, I feel that folding these discoveries back into our overall base of human knowledge means we just get better and better at what we do when it comes to all the software that runs everything today.

That doesn’t mean there won’t be more of these discoveries — and embarrassments; there’ll be plenty more to come. Let’s just hope they’re on the general scale of the FaceTime group chat bug: frustrating, maddening, but to be candid, nobody died. Apple will fix the bug, FaceTime group chat will return, and the next (and last) time we’ll hear much about it will be in a 2019 tech news retrospective at the end of the year.

In the grand scheme of possibilities for epic software failures, I’ll take this version any day of the week over some of the others I can imagine.


* We’re just not good at this. It’s why bank robbers get caught: you have a plan, you’ve carefully thought it out end to end, and then you miss that one little thing that makes it all unravel. The moral of the story? Don’t rob banks and don’t write software.

Photo credit: John Tann