Take a moment to think about what the world looked like a century ago. By 1920, the disruptive technologies of the day, electricity and internal combustion, were already almost 40 years old, yet had little measurable economic impact. For most people, life largely went on as it always had, with little to suggest that anything was about to change.
Over the next decade, however, that would change. As ecosystems formed around the new technologies, productivity soared and living standards dramatically improved. Yet the news wasn’t all good. While technology did much to improve people’s lives, it also facilitated war and genocide on an unprecedented scale.
Today, we are likely at a similar point. Nascent technologies have the potential to create a new era of productivity, but also horrific destruction. Too often, we forget that technology should serve humans and not the other way around. Make no mistake: this is not a problem we can innovate our way out of. Technology will not save us. We need to make better choices.
The End Of Moore’s Law And A Shift To A New Era Of Innovation
Over the past several decades, innovation has become almost synonymous with digital technology. As we learned to cram more and more transistors onto a silicon wafer, value shifted to things like design and user experience. The speed of business increased and agility became a primary competitive attribute. Strategy and planning gave way to experimentation and iteration.
The success of venture-backed entrepreneurs led to arrogance and eventually the myth that Silicon Valley had somehow hit on a model that could be applied to any problem in any industry or context. With valuations of tech companies exploding, a new sense of technological libertarianism began to emerge in which many began to value algorithms over human judgment.
Yet today, that narrative is beginning to unravel for two reasons. First, our ability to cram more transistors onto a silicon wafer, commonly known as Moore’s Law, is ending. Second, we’re beginning to realize that technology has a dark side. For example, artificial intelligence is vulnerable to bias and social media can have negative psychological effects.
On July 16th, 1945, the world’s first nuclear explosion shook the plains of New Mexico. J. Robert Oppenheimer, who led the scientific team that developed the atomic bomb, later recalled that the moment brought to mind a line from the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.” It was clear that we had crossed a moral Rubicon.
Many of the scientists of Oppenheimer’s day became activists, preparing a manifesto that highlighted the dangers of nuclear weapons, which helped lead to the Partial Test Ban Treaty. The digital era, on the other hand, has seen little of the same reverence for the power and dangers of technology. In fact, for the most part, Silicon Valley’s engineering culture has eschewed moral judgments about its inventions.
Today, however, as our technology becomes almost unimaginably powerful, we increasingly need to confront significant ethical dilemmas. For example, artificial intelligence raises a number of questions, ranging from dilemmas about who is accountable for the decisions a machine makes to how we should decide what and how a machine learns.
Or consider CRISPR, the gene editing technology that is revolutionizing the life sciences and has the potential to cure terrible diseases such as cancer and multiple sclerosis. We have already seen the problems hackers can create with computer viruses; how would we deal with hackers engineering new biological viruses?
It seems fitting that the fall of the Berlin Wall happened during the same month, November 1989, that Tim Berners-Lee created the World Wide Web. What followed was a time of great optimism in which both information and people enjoyed unprecedented freedom. The twin powers of technology and globalization seemed unstoppable.
Across the world, free-market technocrats pushed a brand of market fundamentalism known as the Washington Consensus. To receive loans, developing nations were made to accept harsh economic measures that would never have been accepted in western industrialized nations. Within developed countries, the interests of labor lost ground to those of corporations.
These policies led to genuine achievements. Hundreds of millions were lifted out of poverty. Free trade and free travel increased. Technology enabled even a relatively poor kid in a poor country, armed with an Internet connection, to access the same information as a wealthy scion studying at an Ivy League university.
Yet the gains were unevenly shared, and resentment festered. Perhaps not surprisingly, we’ve seen a global rise in populist authoritarian movements that have shifted governance dramatically against the very open policies that fueled globalization and technological advancement in the first place. The pendulum has swung too far. We need to refocus our energy from technology and markets back to the humans they are supposed to serve.
We Are The Choices We Make
While the problems we have today can seem unprecedented and overwhelming, we’ve been here before. After World War II, the world teetered between liberal democracy and authoritarianism. New technologies, such as nuclear power, antibiotics, and computers, represented unprecedented possibilities and challenges.
Yet in the wake of destruction, an entirely new international system was created. The United Nations provided a forum to resolve problems peacefully. Bretton Woods stabilized the global financial system. The creation of the welfare state helped mitigate the harsher effects of the market economy and stronger protections for labor helped build a vibrant middle class. Arms agreements reduced the risk of armageddon.
Today, we are at a similar crossroads. We are present at the creation of a new technological era in the midst of a pivotal political moment. The choices we make over the next decade will have repercussions that will reverberate throughout the new century. Will we serve our technologies or will they serve us? Will we create a new global middle class or pledge fealty to a global elite?
One thing is clear: These choices are ours to make. Technology will not save us. Markets will not save us. We can, as we did in the 1920s and 30s, choose to ignore the challenges before us or, as we did in the 1940s and 50s, choose to build institutions that can help us overcome them and build a new era of peace and prosperity. The ball is in our court.