(We) humans can be really annoying. Especially when it comes to new technologies. We expect AI to perform flawlessly, but we readily accept mistakes and shortcomings from ourselves and other humans. It's a classic case of double standards.
Let's look at a few examples.
Waymo's Self-Driving Cars: Safer than Humans
Waymo, a subsidiary of Google's parent company Alphabet, has been developing self-driving car technology for years. And the results are impressive. According to a report from the Waymo Safety team, their driverless cars were 6.8 times less likely than human drivers to be involved in a crash resulting in an injury. That's an 85% reduction in injury crashes compared to human drivers.
To put that in perspective, everyone agrees that drinking and driving is a bad combo. At a blood alcohol level of 0.08, the legal limit, the risk of an accident is 2.69 times higher than at a BAC of 0.00. That is to say, Waymo is to the average human driver roughly what a sober driver is to a drunk one. Or to put it differently, rather than accept self-driving vehicles, we prefer to let all of the humans effectively drive drunk.
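For the skeptical reader, the arithmetic behind those two claims is easy to check. The 6.8x Waymo figure and the 2.69x BAC figure come from the sources cited above; the rest is a back-of-the-envelope sketch:

```python
# Sanity-checking the relative-risk figures quoted above.
# The 6.8x (Waymo) and 2.69x (BAC 0.08) ratios come from the text;
# everything else is simple arithmetic.

waymo_risk_ratio = 6.8  # human injury-crash risk / Waymo injury-crash risk
bac_risk_ratio = 2.69   # crash risk at 0.08 BAC / crash risk at 0.00 BAC

# "6.8 times less likely" means Waymo's risk is 1/6.8 of a human's,
# i.e. roughly an 85% reduction in injury crashes:
reduction = 1 - 1 / waymo_risk_ratio
print(f"Injury-crash reduction vs. humans: {reduction:.0%}")  # ~85%

# The sober-vs-drunk comparison: choosing a human driver over Waymo
# multiplies risk by a larger factor (6.8x) than driving at the legal
# BAC limit does (2.69x).
print(waymo_risk_ratio > bac_risk_ratio)  # True
```

In other words, by these numbers the gap between Waymo and a human driver is wider than the gap between a sober driver and one at the legal limit.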
Google's AI Search: Smarter than News Outlets?
Another example is Google's use of AI to generate search result summaries. Thanks to recent LLM advancements, Google can provide concise, high-quality summaries for many search queries, saving users the time and effort of sifting through multiple web pages.
However, instead of acknowledging the impressive capabilities of Google's AI, critics tend to focus on the instances where it gets something wrong or provides a suboptimal result. I get it - it's fun to point and laugh, especially when told a rock a day might keep the doctor away. Yet instead of weighing these failures against the totality of Google's scale and how often the summaries are useful, critics treat these relatively rare occurrences as if they completely invalidate the technology, ignoring the countless times it outperforms humans. And conveniently, those most eager to point this out are precisely those most at risk of losing ad revenue or otherwise becoming less relevant if summaries are good enough to prevent links out to their pages (the move toward trusted voices via subscription lessens this - but still gets folks angsty).
It's a clear double standard. We're willing to overlook human errors and biases, but we hold AI to an impossibly high standard of perfection. This dichotomy is both puzzling and frustrating, especially when you consider the remarkable achievements of modern AI systems.
Let's take a step back and examine some of the incredible feats AI has accomplished in recent years. Language models like GPT-4 can generate human-like text on virtually any topic, from creative writing to technical documentation. Image generators like Midjourney can produce stunning visuals from simple text prompts. And let's not forget about AlphaFold, the AI system that predicts the structure and interactions of all of life’s molecules.
These are not trivial accomplishments. They represent major breakthroughs that have the potential to revolutionize entire industries and advance scientific research in profound ways. Yet, we often dismiss or downplay these milestones because we fixate on the occasional quirks or imperfections of AI.
The double standard
Imagine if we held human creations and endeavors to the same unrealistic standards. Would we disregard the entire field of architecture because sometimes buildings collapse? Would we reject all medical treatments because they don't have a 100% success rate? Of course not. We understand that perfection is an unrealistic and often unnecessary benchmark.
Why, then, can't we extend the same understanding to AI? Like any technology developed by imperfect humans, AI will inevitably have its flaws and limitations. But that doesn't negate its incredible potential or the remarkable progress it has already made.
I suspect some of our double standard stems from a deep-rooted fear of being surpassed or made obsolete. At least that's what we see in movies. Or to bring it to reality, we can simply look at our well-documented history of resisting technological change. Calculators brought fears that we'd lose the ability to do basic math. Fast forward to today and we all have supercomputers in our pockets.
To be clear, this isn’t a techno-optimist view. While our fears about new technologies have, in the long run, been consistently proven to be unfounded, there is real risk for folks in the short term. These concerns are not merely hypothetical. The displacement of jobs due to automation, the potential for widening socioeconomic gaps as AI benefits accrue disproportionately, and the ethical dilemmas surrounding autonomous decision-making systems are all pressing issues demanding our attention:
Bias and Discrimination: AI systems are not immune to bias. Trained on data that reflects existing societal biases, these systems can perpetuate or even amplify discrimination.
Information and Disinformation: The power of AI to generate convincing text, images, and videos raises alarming questions about the spread of disinformation - especially ripe for sowing societal distrust in political structures or exploiting and violating individuals, especially young people.
Dependency and Autonomy: As we become more reliant on AI systems for tasks ranging from navigation to healthcare, the question of human autonomy comes to the forefront. How do we ensure that individuals retain agency and critical thinking skills (and not just possess them but use them) in a world increasingly shaped by AI-driven recommendations and decisions? Folks get lazy - they fall asleep at the wheel, and are likely okay with that.
These are real issues we could be talking about. But instead we focus on glue pizza.
Don’t be a jerk
If an AI system outperforms humans in a specific task or domain, let's celebrate that accomplishment rather than nitpicking its occasional missteps. Conversely, if an AI system falls short or introduces new risks, let's address those shortcomings rationally and proportionately, without treating them as inherent flaws that invalidate the entire technology.