It is difficult to find a popular or scholarly publication today that has not recently hosted a debate about the promise of artificial intelligence (AI). For many, AI is a problem-solving panacea; for just as many others, it is a field more than worthy of a skeptical eye. Where is the balance? What perspective should submariners adopt?
WHAT IS ARTIFICIAL INTELLIGENCE (AI)?
AI is an interdisciplinary sub-field of computer science that seeks to recreate in computer software the processes by which humans solve problems. AI “knowledge engineers” would extract expertise from professionals like submariners and then structure it in ways that permit relatively flexible problem-solving in a specific area, such as submarine planning and decision-making or signal data analysis.
AI systems differ from conventional computer-based systems in a number of important ways. First, conventional systems store and manipulate data within some very specific processing boundaries, while AI systems store and apply knowledge to a variety of unspecified problems within a selected problem domain. Conventional systems are often passive, where AI systems actively interact with and adapt to their users. Conventional systems cannot infer beyond certain pre-programmed limits, but AI systems can make inferences, implement rules of thumb, and solve problems in much the same way we routinely decide whether to buy a Ford or a Chevy, or to accept a new professional challenge — though AI systems cannot usually infer beyond a set of pre-programmed events and conditions.
The representation of knowledge is the mainstay of AI research and development, and is the reason why so many otherwise staid managers and scientists are so enamored with AI. If knowledge and expertise can be captured in computer software and applied at a moment’s notice, then major breakthroughs may be possible in the production and distribution of knowledge. If it is possible to capture the expertise of the best naval tacticians in flexible and friendly computer-based systems, then productivity and efficiency might explode across a “domain.”
Perhaps surprisingly, AI is a very young field of inquiry. Twenty years ago very few would admit to a commitment to AI research, but largely through the efforts of a few farsighted individuals the field began to grow by the early 1970s. Today it is difficult to find designers of interactive computer-based systems who have not given serious thought to the promise of artificial intelligence.
AI systems designers use a set of unique tools to represent knowledge and build intelligent problem-solving systems. Imagine for a moment the detailed subjects that appear in the many Navy tactical manuals. Then imagine a computer program — not at all unlike the ones resident in human brains — capable of searching through the sources for information in order to solve a specific problem. AI software languages permit information to be structured as knowledge and permit system users to apply the knowledge to a variety of problem-solving tasks.
Today’s AI tools and techniques permit programmers to develop search capabilities through networks of facts and relationships which, in turn, permit users of AI systems to solve analytical problems. Special purpose software languages permit AI systems designers to represent knowledge in several ways, including frames, scripts, semantic nets, and rules (Andriole, 1985). Perhaps the most widely utilized knowledge-representation technique involves the development of cognitive rules of thumb, usually expressed in “if . . . then” form. Imagine, for example, rules regarding the placement of sonobuoys for ASW that might calculate currents, ranges, capabilities, and a variety of other aspects that comprise optimal placement tactics, all programmed within an expert system capable of generating advice about where and when to drop sonobuoys. In fact, such a system exists today.
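The flavor of such “if . . . then” rules can be sketched in a few lines of modern code. The fragment below is purely illustrative: the rule conditions, thresholds, and advice strings are invented for this article and do not reflect actual sonobuoy tactics or any fielded system.

```python
# A minimal sketch of "if ... then" production rules, in the spirit of the
# sonobuoy-placement expert system described above. All rules, thresholds,
# and messages here are hypothetical, not actual ASW doctrine.

def advise_drop(current_knots, target_range_nm, buoy_detect_range_nm):
    """Apply simple if-then rules and return a list of placement advice."""
    advice = []
    # Rule 1: if the estimated target range exceeds the buoy's detection
    # range, then recommend a barrier pattern rather than a single buoy.
    if target_range_nm > buoy_detect_range_nm:
        advice.append("target beyond single-buoy range: use a barrier pattern")
    # Rule 2: if the surface current is strong, then bias the drop up-current.
    if current_knots > 2.0:
        advice.append("strong current: offset drop point up-current")
    # Default rule: with no complicating factors, drop on the datum.
    if not advice:
        advice.append("drop on best-estimate datum")
    return advice

print(advise_drop(current_knots=3.0, target_range_nm=12.0,
                  buoy_detect_range_nm=5.0))
```

A real system would chain hundreds of such rules through an inference engine; the point here is only that each rule is an explicit, inspectable statement of expertise.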
Many other systems use rules to make inferences about what is happening and what to do about it.
As you have no doubt already surmised, the key to the power of all rule-based AI systems lies in the accuracy and depth of their rules. Bad rules (or doctrine) produce bad conclusions, just as bad human probability estimates frequently result in strategic and tactical surprises. It is the job of the “knowledge engineer” to make sure that the rules (or networks) in a system represent substantive expertise to the fullest extent possible. This requirement, in turn, means that doctrine-based systems can never stop developing. In order for them to keep pace with the field they are trying to capture electronically, they must routinely be fed new doctrines.
One of the earliest AI research goals was to develop computer-based systems that could understand free-form language. The “natural language processing” branch of AI represents knowledge by endowing software with the capability to understand the meaning of words, phrases, parts of speech, and concepts that are expressed textually in whatever language is “natural” to the intended system user. It is now possible to converse directly with a computer in much the same way we converse with human colleagues. Natural language systems are today in use in DOD to track ships at sea, organize and manipulate huge data bases, and bridge the gap between system users and smart but otherwise crude expert systems — though it is important to realize that nearly all of these systems are “prototypes.”
Finally, there are vision and robotic systems that also exploit the incarnation of knowledge into software. Some vision systems are capable of interpreting objects and environments and acting accordingly, while robotic systems soon will be capable of performing rudimentary tasks in real time.
It is important to distinguish between the tools and techniques of AI and the substantive areas targeted by the AI R&D community. Tools and techniques consist of special purpose software languages, rules, semantic and inference networks, natural language processing, and even unique hardware systems. But not every area is amenable to these tools and techniques. There is currently a great debate raging between those who feel that AI can be applied to virtually all kinds of problem-solving and those who feel just as strongly about the limits of AI. This latter group believes that it is theoretically impossible to capture the essence of intuitive problem-solving in computer software, while the true believers insist that even the most complex problems can be modeled. What about submarining?
How much can AI help?
ARTIFICIALLY INTELLIGENT SUBMARINING
Where are the opportunities in submarining? There are at least four areas that might benefit from the selected application of AI. They include systems status monitoring and diagnosis, situation assessment, tactical operations, and planning and decision making.
Systems Status Monitoring and Diagnosis
There are any number of submarine systems that require constant monitoring. When they malfunction, corrective action must be taken. Unfortunately, the diagnosticians on board a submarine are not always the best available. What if time and effort were devoted to culling the procedures used to diagnose and fix systems problems from the very best diagnosticians? What if their expertise were incarnated in software and made accessible to experienced and inexperienced operators alike?
It is well within the capability of today’s state-of-the-art to capture and represent such expertise and to embed expert diagnostic systems on submarines — so long as the problem is selected carefully and genuine problem-solving experts exist (see below). Such expert systems might reduce the analytical burden on operators substantially, and permit them to predict systems and sub-systems failures long before they occur.
Intelligent systems might thus be developed to monitor internal systems, diagnose and predict faults, correct or compensate for selected faults, and even respond to emergencies. All that is necessary to build these systems is access to expertise, time, and, of course, funding.
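In miniature, such a diagnostic knowledge base amounts to a table of symptom patterns paired with fault hypotheses. The sketch below is a toy example: the symptom names, fault labels, and matching scheme are hypothetical rather than drawn from any fielded system.

```python
# A toy diagnostic knowledge base of the kind described above: observed
# symptoms are matched against fault rules culled from expert diagnosticians.
# Symptom and fault names are invented for illustration.

FAULT_RULES = [
    ({"low_pressure", "high_temp"}, "coolant pump degradation"),
    ({"low_pressure"}, "possible line leak"),
    ({"erratic_reading"}, "sensor fault"),
]

def diagnose(symptoms):
    """Return faults whose symptom patterns are fully present, most specific first."""
    matches = [(len(pattern), fault)
               for pattern, fault in FAULT_RULES
               if pattern <= symptoms]  # rule fires if its pattern is a subset
    return [fault for _, fault in sorted(matches, reverse=True)]

print(diagnose({"low_pressure", "high_temp"}))
```

Ordering the matches by specificity is one simple conflict-resolution strategy; real expert systems weigh rules with certainty factors or probabilities as well.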
Situation Assessment

AI tools and techniques can make direct contributions to the interpretation of data from systems that monitor the external environment, correlate sensor, sonar, and other information, and make assessments about the actions of adversary and friendly forces. The procedures that are now implemented manually or with the aid of unintelligent computer-based systems might very well lend themselves to knowledge-based processing. But note that we are not suggesting that expert systems replace on-board analysts and decision-makers, rather that AI assume some of the low-level analytical burden now placed on certain crew members and thereby free them to devote their expertise to more complicated problems.
AI tools and techniques can help with situation assessment through their ability to deal with uncertain or incomplete information, from which they can generate probabilistic likelihoods about the nature and threat of the situation at hand. These likelihoods will not override the analyst’s judgments, but augment them, and permit him to run what-if sensitivity analyses with the expert system — to experiment with different assessment hypotheses in real time.
Tactical Operations

What might intelligent systems do for the undersea tactician? They might sort and prioritize threats and targets, recommend countermeasures, and support weapons employment. All of these tasks are within the reach of current knowledge-based systems technology. It is possible, for example, to develop systems that might discriminate among threats. Expert systems are under development to compute target values. Systems have been conceived that will match targets and weapons and assist in weapons employment.
The key to the design and development of these systems lies in the capturing of the expertise necessary to drive them. So long as experts can be found, and so long as the problems are defined realistically and manageably, knowledge-based systems might soon offer substantial support to their human counterparts.
Planning And Decision Making
At the highest, most complicated level are problems that require planning, re-planning, and decision-making under conditions of great uncertainty and stress. How can intelligent systems help here?
It is possible to develop crude planners, contingency planners, re-planners, and decision option generators/selectors. Selecting among competing tactical options — that may have been generated by an AI system — is much more difficult than generating candidate strawmen. Ultimately it is the captain’s job to select — and defend — a decision to implement a specific option.
The differences among systems monitoring and diagnosis, situation assessment, tactical operations, and planning and decision-making should be evident. As we move up the complexity ladder, the prospects for knowledge-based systems application grow dimmer. While this is not to suggest that the applied potential of AI ends at tactical operations, it is to argue that there are planning and decision-making tasks that will be much more difficult to support with AI — or any analytical methodology for that matter. Time will tell if high level functions can be supported with intelligent systems; as this article is written, the jury is definitely still out.
AI, SUBMARINING, AND THE LIMITS TO GROWTH
While it may be difficult to build all of the systems described above, efforts must be made to make the systems that are deployed easier and easier to use. The use of natural language interfaces, interfaces capable of anticipating user queries, and displays with extra-wide communications channels must become commonplace if knowledge-based systems are to succeed.
There is no danger — immediate or otherwise — of AI systems replacing trained operators or experienced decision-makers. In fact, the whole notion of AI as a threat to operational personnel represents the wrong way to think about the applied potential of AI. AI represents yet another tool for the defense problem-solver, a tool that should be used to augment and amplify the expertise resident in prospective users, not replace it. The only exception to this rule of thumb involves the application of AI tools and techniques to very low-level, computationally intensive problems that are tailor-made for AI and that for far too long have burdened human analysts and operators.
There are, however, a number of issues and problems that will define the role of AI in submarining. They include problem “bounding,” the crisis of expertise, and the potential for new forms of information warfare as more and more intelligent systems are deployed.
Bounded Vs. Unbounded Problems
It is relatively easy to bound the diagnostic problem of a device. If a system malfunctions, there are only so many diagnostic possibilities. Even complicated systems have finite solutions. But as we move from simple system diagnostics to complicated tactical planning and decision-making we begin to move from bounded to unbounded problems.
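The point about boundedness can be made concrete: a device with a handful of components has only finitely many candidate diagnoses, so every one of them can be checked exhaustively. The sketch below is hypothetical; three components and a made-up observation model yield just eight candidate fault sets to examine.

```python
# Boundedness in miniature: a device with three components, each either OK
# or faulty, has only 2**3 = 8 candidate diagnoses, so all of them can be
# enumerated. The components and the alarm model are invented for illustration.

from itertools import product

COMPONENTS = ("pump", "valve", "sensor")

def predicts_alarm(state):
    """Hypothetical model: the alarm sounds iff the pump or valve is faulty."""
    return state["pump"] or state["valve"]

def consistent_diagnoses(alarm_observed):
    """Enumerate every OK/faulty assignment and keep those matching the observation."""
    diagnoses = []
    for bits in product((False, True), repeat=len(COMPONENTS)):
        state = dict(zip(COMPONENTS, bits))
        if predicts_alarm(state) == alarm_observed:
            diagnoses.append({c for c, faulty in state.items() if faulty})
    return diagnoses

print(len(consistent_diagnoses(alarm_observed=True)))
```

An unbounded problem, by contrast, offers no such finite space to enumerate, which is exactly why tactical planning resists this style of solution.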
The more unbounded the problem, the more difficult the solution, and the correspondingly greater the challenge to AI. Since warfare cannot be pre-defined against every possible contingent action and reaction, and since command is as much an art as it is a pseudo-science, it will be difficult to develop intelligent systems capable of inducting in real time. It is important that our expectations about the efficacy of AI be held short of creative problem-solving — i.e., the kind of problem-solving exhibited by commanders who have never been trained to improvise, but who do it very, very well.
The Crisis of Expertise
Two kinds of expertise must be present to develop a knowledge-based system. The first is resident in the subject matter expert — in the fire control officer, the ASW analyst, the sonar operator, and the captain — while the second is resident in the intelligent systems designer (usually referred to as the “knowledge engineer” in the systems design process). There are precious few of either. Before an expert system can be built, for example, an articulate expert must be found. This problem is subtle because there are far more self-proclaimed experts than there are experts with impressive empirical track records. Genuine expertise presumes a successful history and a consensus about, for example, maneuver tactics. If twelve experts yield twelve solutions to the same problem, the domain is not ready for AI.
A related problem to the shortage of subject matter expertise is the over-reliance upon but one or two experts who might communicate bad or incomplete knowledge. Similarly, it is difficult to know when you have captured enough expertise. The more unbounded the problem, the more difficult it is to know when to stop.
There is also a shortage of skilled knowledge engineers, the professionals who must elicit and represent expert knowledge. Here too we find a preponderance of self-proclaimed experts, though not nearly enough with applied experience. If knowledge-based systems design and development is to continue, more knowledge engineers must be trained.
New Forms of Information Warfare

Assuming that low-level and some mid- and high-level AI systems are eventually fielded, what new security challenges will we face? Precious little thought has been devoted to the sabotage, theft, or alteration of knowledge bases. If access is gained to the rules that govern adversary behavior, then the battle can be won. If the rules are altered to produce predictably incorrect decisions, then the mission can be fulfilled; and if access to a critical knowledge base can be denied, then it will be impossible for a commander’s unit to survive. Perhaps such possibilities are far-fetched; perhaps they are not. Regardless of their likelihood, some thought should be given to the new forms of information warfare that the application of AI will suggest — just in case.
This short article has attempted to introduce the key components of artificial intelligence and to map the applied potential of AI for submarining and, by implication, naval warfare. We have also tried to discuss the key issues surrounding the design, development, and deployment of intelligent systems. It is clear that tremendous opportunities exist above and below the sea for the application of knowledge-based expert, natural language, robotic and vision systems. It is also clear that AI is not a problem-solving panacea and the design of knowledge-based systems is not without problems. AI systems certainly present no threat to operators; the real challenge lies in creating environments where AI systems can augment human expertise without competing with or replacing it.
Stephen J. Andriole and Jon L. Boyes