Twenty years ago, the people interested in artificial intelligence research were mostly confined to universities and nonprofit AI labs. AI research projects were mostly long-term engagements that spanned several years, or even decades, and their goal was to serve science and expand human knowledge.
But in the past decade, thanks to advances in deep learning and artificial neural networks, the AI industry has undergone a dramatic change. Today, AI has found its way into many practical applications. Scientists, tech executives and world leaders have all touted AI in general and machine learning in particular as one of the most influential technologies of the next decade.
The potential (and hype) surrounding AI has drawn the interest of commercial entities, nation states and the military, all of which want to leverage the technology to gain an edge over their rivals.
The multi-faceted AI arms race has increased demand for AI talent. There is now a major shortage of people who have the skills and knowledge to carry out major AI research projects across different industries. Under such circumstances, those with the deepest pockets have managed to hire AI scientists for their projects.
This has led to an AI brain drain, drawing scientists and researchers away from the institutions where artificial intelligence was born and developed into the revolutionary technology it has become.
How deep learning ended the AI winter
Before the deep learning revolution, AI was mostly dominated by rule-based programs, where engineers and developers manually encoded knowledge and operation logic into their software. During those years, artificial intelligence had been widely known for overpromising and underdelivering, and had undergone several “AI winters” after failing to meet expectations.
Around the turn of the decade, scientists managed to use neural networks to perform computer vision and natural language processing (NLP), two areas where rule-based systems had performed very poorly.
This turn of events enabled AI to enter numerous fields that were previously considered off limits or extremely challenging for computers. Some of these areas included voice and face recognition, object detection and classification, machine translation, question-answering and more.
This paved the way for many new commercial uses of AI. Many of the applications we use every day, such as smart speakers, voice-powered digital assistants, translation apps and phone face locks, are all powered by deep learning algorithms and neural networks. The revival of neural networks also created new inroads in other areas such as autonomous driving, where computer vision plays a key role in helping self-driving cars make sense of their surroundings.
The possibilities offered by deep learning drew interest from large tech companies such as Google, Facebook and Amazon. Deep learning became a way for these companies to offer new and better services to their customers and gain an edge over their competitors.
The renewed interest in neural networks triggered the race to poach AI scientists from academic institutes. And thus began the AI brain drain.
How AI scientists became MVPs
While handsome salaries play a large role in drawing AI professors and researchers away from universities and to tech companies, they’re not the only factor contributing to the AI brain drain. Scientists also face a cost problem when working on AI research projects.
Some areas of AI research require access to huge amounts of data and compute resources. This is especially true of reinforcement learning, a technique in which AI agents develop their behavior through massive trial and error, such as playing hide-and-seek 500 million times or the equivalent of 45,000 years of Dota 2, all in fast-forward. Reinforcement learning is a hot area of AI research, especially in robotics, game bots, resource management and recommendation systems.
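To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning, one of the simplest reinforcement learning algorithms, on a toy five-cell corridor. The environment, reward values and hyperparameters are all illustrative inventions for this example, not drawn from any of the systems mentioned above; real projects like the hide-and-seek and Dota 2 agents use far larger neural-network-based variants, which is exactly where the compute costs come from.

```python
import random

# Toy setup: a 5-cell corridor. The agent starts at cell 0 and earns a
# reward of 1.0 only upon reaching cell 4. Everything here is illustrative.
N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

# Q-table: estimated value of taking each action in each state
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply the action, clamp to the corridor, return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    if nxt == N_STATES - 1:
        return nxt, 1.0, True
    return nxt, 0.0, False

random.seed(0)
for _ in range(500):                  # many cheap trial-and-error episodes
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the current estimates, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy should move right from every non-goal cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Even this toy agent needs hundreds of episodes to learn a trivial task; scaling the same trial-and-error loop to 3D environments or full video games is what pushes training costs into the millions of dollars.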
The computation costs of training reinforcement learning AI models can easily reach millions of dollars, the kind of money that only rich tech companies can spare. Moreover, other kinds of deep learning models often require access to large sets of training data that only large tech companies like Google and Facebook possess.
This also makes it very hard for AI researchers to pursue their dreams and projects without the support and financial backing of big tech. And big tech’s support seldom comes for free.
What are the effects of the AI brain drain?
With more and more professors, scientists and researchers flocking to the commercial sector, the AI industry will face several challenges. First, universities will have a hard time hiring and keeping professors to train the next generation of AI scientists.
This will in turn further widen the AI skills gap. Consequently, the wages of AI researchers will remain high. This might be pleasant for the researchers themselves, but not so for smaller companies, which will struggle to hire AI talent for their projects.
The commercialization of artificial intelligence will also affect the kind of advances the field will see in the coming years. The commercial sector's interest in AI is primarily in developing products that have business value. It is less interested in projects that serve science and the welfare of humanity in general.
One notable example is DeepMind, one of the handful of research labs that aims to create human-level AI. Since acquiring DeepMind, Google has given the research lab access to its unlimited compute, data and financial resources. But it has also restructured the AI lab to create a unit that creates commercial products. DeepMind is now in the midst of an identity crisis and has to decide whether it’s a scientific research lab or an extension of its for-profit owner.
Finally, the AI brain drain and the commercialization of artificial intelligence will mean less transparency in the industry. For-profit organizations seldom make their source code and AI algorithms available to the public. They tend to treat them as intellectual property and guard them closely behind their walled gardens.
This will result in slower AI research and a lot of “reinventing the wheel,” as companies will share less knowledge to keep their edge over their competitors.