
From AI Hype to AI Proof

The military’s next steps may advance applications of artificial intelligence.

Winner of The Cyber Edge 2021 Writing Contest

Convincing senior defense decision makers to significantly invest in artificial intelligence capabilities that would add more value to the United States’ already digitized operational capabilities—particularly in the cyber domain—needs more than pronouncements that “AI can save the taxpayers money.” It requires a logical progression of defining the objective, identifying the need, demonstrating specific results, conducting comprehensive cost analysis and, particularly in the case of applications in the cyber domain, thoughtfully discussing resilience and deception.

Artificial intelligence promises to be an important predictive tool in the decision-making process necessary for effective cyber warfighting. Unfortunately, the overheated hype surrounding almost every discussion of the military applications of AI has now become a detriment to that future. Too much is being promised; too little is being explained in practical terms. That combination leads to disappointment, and in the realm of defense programming and budgeting, it is a recipe for canceled programs. Abstract concepts and science fiction writing contests will not advance the dialogue.

Specific results need to be demonstrated soon. Thus far, the U.S. Defense Department has adopted a holistic approach to AI, or, as a data analytics company CEO stated, “For the first time, the department is looking holistically at the use of data across operational battlefield applications of data … as well as senior leader decision support and business analytics.”

This approach is codified in the brief Defense Department Data Strategy released in October 2020. However, it is the wrong approach if the department hopes to persuade Congress to provide more funding for AI or to persuade the service chiefs to program more of their existing resources for it. Rather, critical needs must be targeted and resources focused on tasks that are not being performed adequately. As Naval War College professor Nina Kollars writes, “AI is not a condiment. You cannot just sprinkle it on top and make everything better.”

Although Project Maven resulted in controversy, then-Deputy Secretary Robert Work made a wise choice in discussing the program’s overall unclassified objective: develop algorithms to search countless hours of unanalyzed unmanned aerial vehicle footage for possible counterterrorist targets. This is a task decision makers and Congress can understand and support—an AI application doing what humans could not previously do, at a speed that provides actionable intelligence.
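To give such a task concrete shape, the following is a minimal sketch of a footage-triage pipeline: sample frames from drone video and flag those a detector scores above a threshold for human review. It is purely illustrative; the detect() function is a hypothetical placeholder, since Maven’s actual algorithms are not public.

```python
# Illustrative sketch only: detect() is a hypothetical stand-in for a
# classified detection model; no real Maven code or API is implied.
import cv2  # OpenCV, used here only to read video frames


def detect(frame) -> float:
    """Placeholder: return a 0-1 score that the frame contains a target."""
    raise NotImplementedError("stand-in for an actual detection model")


def triage_footage(path: str, threshold: float = 0.8, stride: int = 30):
    """Yield (frame_index, score) for frames worth an analyst's attention."""
    cap = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Sample roughly one frame per second of 30 fps footage.
        if index % stride == 0:
            score = detect(frame)
            if score >= threshold:
                yield index, score
        index += 1
    cap.release()
```

The value proposition is exactly the one Work articulated: the machine triages a volume of footage no human team could watch, and analysts review only the flagged frames.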

In contrast, a holistic approach that tries to tempt program managers to sprinkle a little AI on existing programs because data can be correlated a bit better than with existing software is simply not persuasive.

The argument for the holistic approach is that the Defense Department is awash with data that it does not utilize. That is already known. What is more, much of that data cannot be cross-correlated because it does not relate to the same mission or task. Data collected in administrative processes is of a different character than the data needed for operational purposes. Gaining traction in the use of AI requires success in a specific program for a discrete operational task. Only then will other program managers become interested in AI. Specific tops holistic.

The Defense Department’s Joint Artificial Intelligence Center and military service AI managers have pointed to the potential use of artificial intelligence in determining planned maintenance—a worthy target. However, software for analyzing real-time machinery conditions to program or reprogram planned maintenance has been a standard tool for more than 40 years.

For example, the U.S. Navy began its predictive maintenance vibration analysis program for surface ship propulsion machinery in the late 1970s. Aggregating data for analysis was then time-consuming but not particularly difficult to digitize. Software programming—the term algorithm was rarely used then—was sufficient to attempt to predict time before potential machinery casualties and to adjust planned maintenance accordingly.
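To illustrate how far plain software goes, here is a minimal sketch of that kind of conventional trend analysis: fit a line to periodic vibration readings and project when the level will cross an alarm threshold. All readings and thresholds are hypothetical placeholders, not actual Navy data.

```python
# Conventional, non-AI predictive maintenance: linear trend extrapolation.
# Every number below is a hypothetical placeholder.
import numpy as np

# Hypothetical monthly vibration readings for one bearing (e.g., mils).
months = np.array([0, 1, 2, 3, 4, 5], dtype=float)
readings = np.array([1.1, 1.2, 1.4, 1.5, 1.7, 1.9])

ALARM_THRESHOLD = 3.0  # hypothetical level indicating a looming casualty

# Ordinary least-squares fit: readings ~ slope * month + intercept.
slope, intercept = np.polyfit(months, readings, deg=1)

if slope > 0:
    months_to_alarm = (ALARM_THRESHOLD - readings[-1]) / slope
    print(f"Projected time before threshold: {months_to_alarm:.1f} months")
    print("Recommendation: advance the planned maintenance accordingly.")
else:
    print("No upward trend; keep the existing maintenance schedule.")
```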

The question that must be asked is: In such a situation, what would AI provide that less costly, existing software and human analysis could not? Automatically assuming that AI is the better or less costly method is another recipe for disappointment.

Convincing decision makers to invest more than token research and development resources requires the AI community to prove that the capability can perform tasks not done before and/or provide answers more useful than those provided with existing methods. It is not enough simply to point to increased speed or personnel reductions. Both require upfront costs in data labeling and aggregation that may be less valuable to an organization that does not generate profit, such as the Defense Department (particularly one with a tremendous turnover rate among decision makers), than to a commercial corporation.

The acknowledgment that data costs money leads to a more persuasive dialogue. AI is nothing without the data to fuel it. It is necessary to assess how sensors will or will not keep up with AI, and it is essential to identify the total cost of the sensor-AI package, which means data is the most critical component. Even in the case of a major AI program that senior decision makers have frequently discussed, such as Project Maven, the algorithms developed would have no use without sufficient drones to obtain the video footage the AI programming was designed to analyze.

Yet most exhortations for the department to spend money on AI algorithms rarely mention the need to develop the more capable sensors required to justify the expense of algorithm development. In the case of Project Maven, for example, the drone video footage already existed as a byproduct of flights, and the challenge was to speed up its analysis. However, making AI useful for many of the tasks proponents envision will require collecting—and in some cases generating—data that does not already exist.

In these situations, the sensors that provide the data, not the algorithms, will be the most expensive element in predictive AI. Applying AI to military operations requires the development of a sensor-to-algorithm system package.

It is important for those responsible for managing military AI to emphasize the sensor-to-algorithm linkage and the total cost package. One popular impression is that data is free—a perception generated by commercial AI’s success in Internet marketing, where the troves of big data are largely free digital exhaust produced by consumers’ use of search engines.

Comparable “free data” does not exist for military application. Additionally, the cost of labeling existing data so that it is suitable for algorithmic processing needs to be identified. Otherwise, defense decision makers and Congress will suffer sticker shock that will put a brake on the adoption of AI for operational military applications.
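A back-of-the-envelope sketch can make that total-cost argument concrete. The breakdown below uses purely hypothetical placeholder figures for sensors, data labeling and algorithm development to show how the sensor package, rather than the algorithm, can dominate the bill.

```python
# Back-of-the-envelope cost structure for a sensor-to-algorithm package.
# Every figure is a hypothetical placeholder, not real cost data.
def total_ai_program_cost(
    n_sensors: int,
    cost_per_sensor: float,
    n_training_samples: int,
    labeling_cost_per_sample: float,
    algorithm_dev_cost: float,
) -> dict:
    """Break an AI program's cost into its major components."""
    sensors = n_sensors * cost_per_sensor
    labeling = n_training_samples * labeling_cost_per_sample
    return {
        "sensors": sensors,
        "data_labeling": labeling,
        "algorithm_development": algorithm_dev_cost,
        "total": sensors + labeling + algorithm_dev_cost,
    }

# Illustrative inputs only: 200 sensors at $500,000 each, one million
# labeled samples at $2 apiece and $50 million of algorithm development.
costs = total_ai_program_cost(200, 5e5, 1_000_000, 2.0, 5e7)
for item, dollars in costs.items():
    print(f"{item:>22}: ${dollars:,.0f}")
```

Under these placeholder numbers, the sensors alone cost $100 million, twice the algorithm development: exactly the kind of total that must surface up front to avoid sticker shock later.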

Commercial AI is not designed to deal with deceptive data, yet, as Sun Tzu warned, “All war is deception.” How AI copes with deceptive data is particularly important to its potential use in cyber warfighting, and that fact must be acknowledged. Yet Defense Department officials continue to assume—at least publicly—that commercial AI can be easily adapted to operational military applications. That cannot be done with algorithms that assume data from the customer or supplier is accurate. In the operational military environment, the ultimate customer is the enemy, and—as is so evident in the cyber domain—a sophisticated enemy does not provide accurate data.
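A minimal sketch illustrates the vulnerability. Using synthetic “benign versus malicious” data and a standard off-the-shelf classifier, flipping a fraction of the training labels, as an adversary feeding false data would, steadily erodes accuracy against clean ground truth. Everything here is synthetic and illustrative; no operational data or model is implied.

```python
# Illustrative label-flipping (data poisoning) experiment on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Two synthetic classes standing in for "benign" (0) and "malicious" (1).
X = np.vstack([rng.normal(0.0, 1.0, (500, 5)), rng.normal(1.5, 1.0, (500, 5))])
y = np.array([0] * 500 + [1] * 500)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Flip a fraction of training labels and score on clean test data."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # the adversary's false data
    model = LogisticRegression().fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.2, 0.4):
    print(f"labels flipped: {frac:.0%}  "
          f"test accuracy: {accuracy_after_poisoning(frac):.2f}")
```

The adversary attacks the classifier’s assumptions, not its mathematics; that is the gap between commercial AI and warfighting AI.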

Recently, the Intelligence Advanced Research Projects Activity—the intelligence community’s counterpart to the Defense Advanced Research Projects Agency—initiated research into the relationship between deception and AI. The topic of deception is now tentatively starting to enter technical discussions within the Defense Department itself.

Thus far, however, deception is not a word routinely linked with the debate over AI. Until it is, the move to further AI applications in the department will end in disappointment—or worse. AI will be seen as a useful tool for maintenance and administrative functions—assuming these uses can be demonstrated to generate new or different results. But it will be seen as a tool only moderately or perhaps not at all useful for operational military applications.

Many commentators have posited that AI and operating in cyberspace are closely entwined. This is evident in how Internet application companies such as Google use big data and AI. However, as a decision-making aid for cyber warfighting, AI faces hurdles—some of them generated by the Defense Department’s current holistic approach to AI applications—that largely relate to the adaptation of commercial AI. The holistic approach will not gain congressional resource support over the long term.

Moreover, cyber warfighting occurs in the realm of deception. Commercial AI is not designed to handle deceptive or false big data; thus far, handling deception has not been part of the AI paradigm. The Go master facing DeepMind’s AlphaGo did not suddenly pull out a checkerboard and start playing checkers. Most people do not try to fool eBay into sending them recommendations for things they don’t want. But those are good metaphors for what happens in cyber warfighting.

From the unclassified viewpoint, convincing the Defense Department and Congress to put the resources needed into effectively applying AI to cyber warfighting requires a dialogue that includes the elements outlined above: defining the objective, identifying the need, demonstrating specific results, conducting comprehensive cost analysis and addressing resilience and deception. Companies interested in persuading the Defense Department to examine their AI products and capabilities would be wise to adopt them.

George Galdorisi is director of Strategic Assessments and Technical Futures for the Naval Information Warfare Center Pacific. Prior to joining NIWC Pacific, he completed a 30-year career as a naval aviator, culminating in 14 consecutive years of experience as executive officer, commanding officer, commodore and chief of staff. In his spare time he enjoys writing, especially speculative fiction about the future of warfare. He is the author of 15 books, including four consecutive New York Times bestsellers. He has written extensively on big data, artificial intelligence and machine learning and is an invited speaker at military-industry conferences. He is co-editor, with Dr. Sam J. Tangredi, of AI at War: How Big Data, Artificial Intelligence and Machine Learning Are Changing Naval Warfare. Contact him at www.georgegaldorisi.com/

Capt. Sam J. Tangredi, USN (Ret.) holds the Leidos Chair of Future Warfare Studies and is professor of National, Naval and Maritime Strategy in the Center for Naval Warfare Studies of the U.S. Naval War College. A retired U.S. Navy captain and surface warfare officer, he served in numerous warships and commanded USS Harpers Ferry (LSD-49). Among other assignments, he was head of the Strategy and Concepts Branch of the Navy Staff, director of the Strategic Planning for the Navy International Programs Office and U.S. defense attaché to the Hellenic Republic of Greece. He has published six books, over 150 journal articles and numerous analytical reports for a wide range of government agencies and academic organizations. Contact him at sam.tangredi@usnwc.edu or www.samjtangredi.com

Source: https://www.afcea.org/content/ai-hype-ai-proof
