The history of technology has seen nothing quite like today's rise of generative artificial intelligence, except perhaps that of the world wide web.
Both the internet and AI have catapulted us from the realm of bytes and bits into an age where our lives are increasingly mediated by screens and algorithms, and where robots help us get stuff done.
Table of Contents
So with the above in mind, this article will cover:
- skeptical voices from the past around the internet's ability to improve society;
- thinkers who caution us today against adopting widespread use of generative artificial intelligence (AI);
- how I try to find a balance between the pessimists and optimists.
We humans have moved along the bell curve of digital adoption at breakneck speed, with innovators and early adopters paving the way for the rest to follow suit.
But as we enter a future molded by new tech, it's natural to pause and reflect on the downsides, too. The status quo is shifting in ways never before seen. How could the novelty of making such digital headway not produce its skeptics?
Whether it's the internet, which promised to democratize information but also brought about new avenues for misinformation and surveillance, or AI, which offers unparalleled efficiency but raises a ton of ethical quandaries, there's often a tendency for pessimism. Hyperbolic viewpoints come to the surface. Valid concerns and criticisms are mixed in.
But how is one to sift through it all?
Rather than dismissing the naysayers as Luddites who oppose any acceptance of a new digital frontier, I say we should still pay them some mind.
Their insights serve as important checks and balances, though it's also easy to get caught up in anti-tech hype. And ignoring an inevitable future leaves us with less time to prepare for what's coming.
So let's get into it.
Early internet skeptics
Wisdom to wield responsibly?
Ever read Bill Joy's often-cited article, "Why the Future Doesn't Need Us," from Wired magazine in 2000?
Joy, a co-founder of Sun Microsystems, rang alarm bells about the potential existential risks of not just the internet, but also of robotics, genetic engineering, and nanotechnology. He cautioned that these rapidly advancing fields could result in technologies that are self-replicating and beyond human control.
In a sobering prediction, Joy raised the specter of a future where humans may become an "endangered species," outmaneuvered and possibly extinguished by the very technologies they birthed.
Some might argue that the jury is still out on nanotech, gene splicing and robots. I think it's safe to say that these have been with us for a while now, and of course we're still here.
Nonetheless, Joy's concerns were especially impactful because of his insider status in the tech world. Unlike outsiders critiquing the industry, he was a leading technologist articulating worries about the path he himself had helped pave. He pointed out the risks of technologies that could be weaponized, either deliberately or accidentally, with catastrophic consequences. Just a few examples include self-replicating nanobots that could consume the Earth's biomass, or genetically-engineered pathogens with the potential for massive loss of life.
Two decades later, the existential risks Joy warned about have echoes in the calls for caution around generative artificial intelligence. While we have yet to see the full manifestation of his direst predictions, Bill Joy's warnings still help us in imagining a framework for ethical considerations in technology development.
Scientists and policymakers refer back to Joy's writings when discussing the need for safeguards, oversight, and responsible development in technology. Even 20 years after its publication, his article reminds us that we must be acutely aware of the dual-use nature of technology: it has the power to both uplift and devastate, and the wisdom to wield it responsibly is perhaps our most critical challenge.
"Silicon Snake Oil"
In the mid-1990s, when the internet was still in its infancy and the world was buzzing with the promise of a digital utopia, Clifford Stoll stood out as a contrarian voice with his book "Silicon Snake Oil: Second Thoughts on the Information Highway."
Stoll, an astronomer and computer expert, was suspicious of the overwhelming optimism surrounding the internet. He warned that it would not be the great equalizer or democratizing force many believed it would be. Stoll's work questioned the depth and quality of online interactions, cautioned against the loss of privacy, and emphasized the value of real-world experiences over virtual ones.
More than two decades later, some of his warnings have proven to be, well, right on point.
While the net has certainly changed many aspects of our lives for the better, it's also brought about a mess of issues that we're still grappling with today. Online discourse often lacks nuance and the erosion of privacy is a concern now more than ever, just to name two quick examples.
Additionally, the promise that the internet would democratize information and level the playing field has been complicated (to say the least) by issues like the digital divide and the concentration of power in the hands of a few tech giants.
Of course not all of his prognostications came true, in particular some chuckle-worthy ones from a Newsweek article from 1995. But all 20/20-hindsight-fun aside, Stoll's cautious approach is a useful lens through which we can assess the impact of new tech, like generative artificial intelligence, on our lives today.
Morozov's "Net Delusion"
In 2011, Evgeny Morozov published "The Net Delusion: The Dark Side of Internet Freedom," which warns that the web's promise of democratization and freedom is not only overhyped but can actually be counterproductive. A researcher and writer on technology and politics, Morozov went after what he calls "cyber-utopianism": the belief that the Internet will inevitably lead to more democratic, free societies.
He argued that authoritarian regimes can and will use the net for surveillance, propaganda, and control, sometimes more effectively than democratic governments can use it to promote freedom.
One of Morozov's major beefs is with the "clicktivism" phenomenon: the idea that online activism (liking, sharing, tweeting, etc.) can replace real-world organizing for positive social change. Superficial engagement can give the illusion of impactful action while achieving little; a digital mirage of activism that detracts from genuine change.
Morozov's skepticism and warnings are still relevant today. The internet, far from being the pure force towards good for which many had hoped, has instead also become a battleground of misinformation, polarization, trolling and other odious behavior.
Authoritarian governments have indeed become really adept at using the web for surveillance and social control. Today's concerns about data privacy, fake news, and election interference all echo Morozov's early caution against taking an overly optimistic view of what the world wide web can achieve politically.
But of course, worldwide internet adoption has also afforded abilities for individuals and groups to organize, inform, and stay connected in ways that would be impossible without it.
The point here is that technologies are not inherently good or bad; it's how they are used and controlled that determines their impact.
Generative AI skeptics
The early years of the internet's climb to mass adoption came with a lot of utopian ideals, but some recent polling about AI shows that today's general public is not so optimistic. And it's not just regular folks who are doubtful. Industry leaders have signed open letters sounding the alarm, and there's certainly no lack of "are we doomed by AI?!" media content out there. Let's take a look at a few prominent voices.
Account for risks
Gary Marcus, a well-established figure in the AI research community, gave a TED talk in which he raises the alarm about the urgent risks associated with runaway AI. He urges us to consider the worst: that the rapid adoption of AI systems poses a threat to democracy, and even to humanity at large. His research identifies several key areas of concern, including the potential for AI to:
- generate misinformation,
- perpetuate harmful biases, and even
- design chemical weapons.
These issues are not just theoretical; they have real-world implications that could undermine the very fabric of society. Marcus argues that the technology creation lifecycle for AI is dangerously accelerated, often bypassing critical safety and ethical considerations.
Thus his call for a global, non-profit organization to regulate AI and ensure its safe deployment. This entity would be responsible for developing safety protocols, akin to phase trials in pharmaceuticals, to ensure that AI technologies are rigorously tested before they are deployed on a large scale. The organization would also conduct research to measure the extent and growth of misinformation, providing data-driven insights to guide policy decisions.
Marcus emphasizes that we need to account for these risks before we adopt new technologies, especially those with the potential to reshape the societal landscape.
Godfather of AI, Geoffrey Hinton
Geoffrey Hinton, often referred to as the "Godfather of AI," has recently become a prominent voice in the skeptic community around artificial intelligence. While he acknowledges the transformative benefits it can bring to fields like healthcare, transportation, and even climate science, he also issues stark warnings about the darker aspects of AI.
One of Hinton's most extreme warnings concerns the possibility of AI systems becoming self-aware and potentially surpassing human intelligence. He suggests that these systems could eventually write their own code, modify themselves, and escape human control. It's a chilling prediction, one that raises existential questions about the role of humanity in a world where machines could make decisions without human intervention.
Like others, "the Godfather" tells us that future systems could become so advanced that they could manipulate public opinion, control markets, or even influence political outcomes, echoing some of the claims made by early internet skeptics.
Similar to Gary Marcus, Geoffrey Hinton has his own urgent suggestions for action. To address these risks, he calls for rigorous scientific inquiry and careful public policy considerations. He admits that there's no clear path to guaranteeing the safety of AI but emphasizes the need for ongoing research and dialogue among scientists, policymakers, and the general public.
The Center for Humane Technology
If you've been keeping tabs on this topic, you've likely come across the Center for Humane Technology. Check out their content and you'll see that they believe the frenzied dash to adopt new AI technologies is risky, and severely lacking in adequate safety measures.
One of the more alarming ideas posited by Aza Raskin and Tristan Harris, the Center's co-founders, is the concept of "reality collapse," where AI becomes so advanced that it can generate convincing fake realities. This could range from deepfakes that are indistinguishable from real videos to AI-generated news that could sway public opinion. The implications of this are, of course, staggering.
They also discuss the potential for AI to exploit legal and ethical loopholes automatically. Imagine AI systems that can effectively navigate around regulations or find ways to exploit human psychology for profit 😳.
Are these dystopian fantasies, or real possibilities that could emerge? Only time will tell.
To mitigate these risks, Harris and Raskin call for a multi-disciplinary approach that involves all kinds of stakeholders. They believe that tackling the AI dilemma requires a collective effort, one that goes beyond technological solutions.
And this is where the skeptics and supporters of generative AI seem to agree: the need for a framework that considers the societal and ethical ramifications of AI, ensuring that its deployment is aligned with human values and well-being.
By examining the above perspectives it's easy to see how AI presents both enormous opportunities and risks, requiring thoughtful management and regulation.
But how can we strike a balance? Untempered optimism around such disruptive technologies is shortsighted (at best), yet many expert naysayers have also been proven wrong over the years.
Navigating the complex landscape of AI requires a nuanced approach, one that takes into account both the optimistic promises and the cautionary, often scary predictions. While the field's skeptics offer important wake-up calls about potential risks and ethical dilemmas, it's also worth remembering that skepticism has its own pitfalls.
The early days of the internet were rife with dire predictions that, while interesting, didn't always pan out as expected. Even well-respected experts can miss the mark when it comes to forecasting the future impact of disruptive technologies.
The technology adoption lifecycle for AI is a double-edged sword. On one side, there's transformative potential that could revolutionize everything from healthcare to transportation. On the other, there are valid concerns about ethical implications, societal impact, and the need for regulatory oversight.
Striking the right balance means not only listening to the pessimistic voices, but also leaving room for the possibility that the future might not be as bleak as some predict.
I think it's wise to approach the adoption of AI with a nuanced perspective, one that's open to its potential benefits while also being cautious of its limitations and risks ⚖️.
This doesn't mean ignoring the skeptics; rather, it's about integrating their valuable insights into a broader framework that also considers the transformative potential of AI.
In the end, the goal should be to adopt a multi-faceted approach to AI, one that involves technological innovation, ethical considerations, and regulatory frameworks.
By doing so, we can aim for a future where AI is developed and deployed in a way that aligns with both technological progress and societal well-being. This even-handed approach allows us to move forward with appropriate caution, consideration, and speed, but also with a sense of optimism for the opportunities AI presents, which, to be sure, are many.