What if machine learning were less than it seems?

Artificial intelligence (AI) – particularly its most famous iteration, machine learning – has been hailed as the next breakthrough technology for military affairs, in the lineage of gunpowder, the combustion engine, and aircraft. One of the defining characteristics of AI innovation in the current geopolitical competition between the United States and China is that it has occurred primarily in the private sector. As Chinese officials adopt a “military-civil fusion” policy to centralize the Chinese Communist Party’s influence over private-sector AI research, U.S. analysts are increasingly calling for a “whole of nation” approach to fundamental technological research. But while AI is a strategically important technology, what if the long-term usefulness of machine learning, in particular, were more limited than it appears?

The question may seem odd, since AI researchers in the private sector wear their sense of momentum on their sleeves. DeepMind researcher Nando de Freitas, for example, recently proclaimed that “the game is over” and that the keys to advancing AI are there for the taking. Breakthroughs in the ability of AI systems to match or exceed human capabilities have sparked increased interest in machine learning. Examples include AlphaGo’s victories over professional Go player Lee Sedol, GPT-3’s ability to write articles and code, Gato’s ability to perform several specialized tasks rather than just one, and AlphaFold’s breakthrough in predicting the structures of all known proteins. In the area of national security, it seems that not a day goes by without news of Department of Defense (DOD) efforts to acquire emerging technologies, including semi-autonomous unmanned systems.

But even in the high-profile field of AI, some prominent figures have begun to express doubts about the role of machine learning in the future development of AI. While machine learning, especially deep learning, has been relatively successful in narrow and carefully tailored domains, eminent AI researcher Yann LeCun recently published a position paper detailing the limits and shortcomings of current machine learning techniques. AI expert Gary Marcus has also been candid about AI systems’ chronic problems with reliability and novelty, noting that the cost of not changing course “is likely to be a winter of deflated expectations.”

Amid the flood of AI-related developments in U.S. military technology, some scholars have begun to tie lofty technological expectations to centuries-old, equally lofty visions of command and control. But as Ian Reynolds writes, “dreams of technologically enabled meaning-making and quick decisions die hard.” Likewise, U.S. Navy Commander Edgar Jatho and Naval Postgraduate School scholar Joshua A. Kroll highlight national security analysts’ insufficient attention to the implications of current weaknesses and deficiencies in AI systems, and what those weaknesses mean for future developments in AI-based military technology.

It is not inconceivable that disappointing developments in AI systems over the coming years will plunge the field into a new “AI winter” – a stage in a historical cycle in which inflated expectations of AI lead to disillusionment with the technology’s potential, followed by a decline in private funding. In this scenario, U.S. efforts to retain the country’s lead in innovation will suffer unless the government takes steps to maintain a sufficient degree of AI research and development independent of the private sector – which requires recognizing machine learning’s current shortcomings.

It may be objected that an AI winter is not guaranteed to occur in the first place. But preparing for one allows the U.S. military to use all the leverage it has over America’s innovation ecosystem to remain the world’s preeminent technological powerhouse – it doesn’t hurt to prepare for the worst. Ideally, the DOD and other branches of the U.S. government would use their relationships with private-sector AI firms to prevent an AI winter from happening at all.

How should this be done?

First, the DOD should promote minority voices in AI pursuing research agendas outside of machine learning by granting them public-private partnerships and acquisition contracts. While it is not a good idea for lesser-known AI techniques to become the DOD’s main focus in the short term, the DOD can walk and chew gum at the same time. The department has already been called upon to send clearer signals “to spur private sector investment” in critical technologies like AI. One of those signals should be that while machine learning has broad applicability in the military and will continue to be relevant for defense purposes, private-sector AI research and development that treats machine learning as just one piece of the AI puzzle will be rewarded. But how can plausible minority agendas in private-sector AI research be distinguished from quieter voices simply exaggerating the potential of their designs?

This brings us to the second recommendation: individuals from diverse intellectual and educational backgrounds should be consulted on the development of measures for the testing, evaluation, verification, and validation of AI systems. Adjusting the DOD’s focus toward the long-term development of AI-enabled military technology requires understanding which minority voices in private-sector AI research present plausible (even if long-term) pathways beyond machine learning – linking this recommendation to the first. For all the talk of AI systems requiring the right amounts and qualities of training data, AI systems and their applications are themselves data that require interpretation. The articles written by GPT-3, the strategies AlphaGo deploys when playing Go, and so on, are all data for humans to interpret. But without the proper lens through which to view that data, an AI system or application can be misinterpreted as more or less advanced or valuable than it actually is.

Thus, expertise in areas such as cognitive psychology, neuroscience, and even anthropology – along with researchers from interdisciplinary fields, in addition to those trained primarily in AI research or development – would help military organizations chart independent paths for AI-based military technology that resist the misaligned ends of private-sector research.

Finally, and most obviously, the United States government should increase its investment in basic research “spanning the range of technological disciplines,” as Martijn Rasser and Megan Lamberth recommend. Many have called for such an increase in recent years, and the possibility of such appropriations looms with the U.S. Senate and House of Representatives having passed the CHIPS and Science Act in July of this year. This recommendation requires no elaboration other than to note that increased funding for scientific research must be accompanied by grounded ways of interpreting AI developments and refined long-term visions for AI design.

Vincent J. Carchidi holds a master’s degree in political science from Villanova University. He specializes in the intersection of technology and international affairs, and his work on these topics has appeared in War on the Rocks, AI & Society, and Human Rights Review.

Image: Flickr/US Department of Defense.
