AI Academy Under Siege

Oren Etzioni analyzes the brain drain of artificial intelligence experts out of academia and suggests some solutions to the problem.

November 20, 2019

Universities have long been a source of talented leaders for industry, but an accelerating exodus of professors with expertise in artificial intelligence has caused concerns. A recent Bloomberg op-ed asked, “If industry keeps hiring the cutting-edge scholars, who will train the next generation of innovators in artificial intelligence?” and The New York Times has noted similar issues. This article analyzes the problem and suggests solutions.

The brain drain of AI experts out of academia can be explained in simple economic terms. The demand for experts has outpaced supply, leading to sharply increased prices. As a result, industry compensation packages are very generous for top academics. Moreover, supply has proven rather inelastic; it takes years of hard work and exceptional talent to become a leading AI expert. Experts are also attracted to industry by distinct data sets, ample computational resources and the potential to impact millions, or even billions, of people through commercial products.

Academic engagement with industry takes many forms that directly benefit universities. Many AI professors obtain funding for their research from companies such as Google and Microsoft. Others consult for companies a day per week, which helps inform their research and teaching. Both professors and students launch innovative start-ups that build on university research, including Google (Stanford University), Akamai (Massachusetts Institute of Technology), Duolingo (Carnegie Mellon University) and Farecast (University of Washington). Companies also frequently license intellectual property from universities.

Quite a few professors take one- to two-year leaves of absence to work at companies and then return to academia. In 1999, I took such a leave from the University of Washington to join a high-tech company as the chief technology officer. I came back to the university a year later and helped launch and teach a new, popular undergraduate course, Advanced Internet and Web Systems. My understanding of software and real-world applications of research was hugely enriched, and I was able to share that with students through courses, talks and informal conversations.

The challenge to academia comes when professors leave to join companies either en masse, such as when Uber hired 40 people from a Carnegie Mellon robotics lab, or individually, as with prominent AI researchers Daphne Koller, Andrew Moore, Andrew Ng, Fernando Pereira, Sebastian Thrun and others. In the last few years, Facebook has aggressively hired professors, including Yann LeCun from New York University, Jessica Hodgins from Carnegie Mellon University, Jitendra Malik from the University of California, Berkeley, Devi Parikh from Georgia Tech and many others.

Facebook typically allows these professors to spend 20 percent of their time at their universities, an arrangement that facilitates recruiting for Facebook but leaves precious little time for professors to teach courses and engage with students as true mentors rather than “consultants.” The arrangement also raises substantial conflict-of-interest concerns that must be mitigated, as the professor is often in a position of power vis-à-vis the students yet primarily works for a for-profit corporation.

One way to address conflicts of interest is what might be called a “separation of church and state” approach, in which campus research and off-campus commercial activities are kept completely distinct, perhaps even in different technical areas. Google has taken that approach with quite a few faculty members. The separation model is often coupled with a roughly even split of on-campus/off-campus time, which is clearly preferable to losing professors from academia altogether.

Alternatively, university-corporation synergy is possible. In 2001, Intel established a series of research “lablets” affiliated with prominent computer science departments at Berkeley, CMU, UW and the University of Cambridge. The lablets were run by professors on leave and offered research funding and open intellectual property, which enabled synergy between on-campus and off-campus activities. Engineering resources and data sets provided opportunities to enhance university research and create powerful collaborations. Of course, commercial influence on the academic research agenda needed to be monitored and disclosed properly to proactively avoid situations like the one at the Brookings Institution.

Intel closed all of the lablets by 2011 and, instead, chose to fund research directly through grants. However, while in operation, the lablets resulted in strong synergy between Intel research and academia.

At the nonprofit Allen Institute for AI, where I am CEO, several faculty members share time between their home universities and the institute and pursue collaborative research efforts with open IP. Professors, students and institute staff collaborate to publish research papers in top venues, and the data and software are available under open-source licensing. This collaborative model allows professors to continue teaching, mentoring and fulfilling university obligations. The institute also provides substantial research funding, which reduces the time professors must spend chasing after and managing grants.

In the long term, universities can take several steps to help increase the ranks of AI experts. They can:

  • Work to increase the number of people applying to Ph.D. programs by encouraging applications from women, underrepresented groups and foreign nationals. Diverse perspectives are essential to ensuring the field achieves its full potential, and role models are key to attracting such talent. Overall, the pipeline needs attention: to meet demand from undergraduates, you need more professors, and to get more professors, you need more people obtaining their Ph.D.s.
  • Encourage faculty members and students to launch start-ups by removing barriers and increasing flexibility. First, the process of licensing research to start-ups should be streamlined and sped up. Second, clear and effective procedures for managing inevitable conflicts of interest must be put in place. Third, universities will benefit from taking the long-term view and enabling faculty members and students to take the leaves (or partial leaves) necessary to get a start-up off the ground.
  • Make it easier for AI researchers to obtain research funding. The bane of all university scientists’ existence is chasing the funding that supports graduate students, postdoctoral fellows and engineering staff. Funders should impose less administrative overhead, such as lengthy proposals, meetings and status reports, and universities can offer staff and software support to minimize the busywork on their end.

Of course, there are still downsides to professors sharing their time with external organizations. But the supply/demand mismatch is so acute that universities must figure out ways to resolve it. And as discussed by the Computing Community Consortium -- which works to bring together representatives from academia, industry and the federal government to engage in needed AI research -- all parties need to collaborate to identify approaches that sustain thriving research and innovation ecosystems in the AI field.

Bio

Oren Etzioni is CEO of the nonprofit Allen Institute for AI and a professor of computer science at the University of Washington.
