projects can profit from the use of AI technologies. Since we feel des Jardains' proposal represents a large step in the right direction, and since we have a relatively clear picture of where the incorporation of AI techniques might significantly enhance his proposal, we first summarize his ideas and then suggest how the concept can be extended by incorporating existing AI problem solving and representation technologies.

Des Jardains' Proposal. Des Jardains' proposal concentrates primarily on the concept of a reusable missions control system which automates major portions of several of the categories of mission operations above. He thinks in terms of a "payload operations cycle" in which requests for data or science from users are queued up and scheduled by missions planning, taking into account their priorities, sequencing demands, and the current state of the craft and its sensors. The output of the missions planner is an "as-planned payload activity timeline," which, when combined with parametric data indicating the craft's current state, yields a specific sequence of commands to the craft. Results of commands yield an "as-performed timeline," which reports back to the data management phase of the cycle. This phase organizes raw data collected during specific intervals, correlates them with the as-performed timeline and with the original user requests, and then delivers the data to the user. An intelligent data management facility would also notice missing data and unfilled user requests, and act as a sort of ombudsman for all users, following up with its own requests to mission planning to fill unmet original user requests.
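To make the cycle concrete, the following sketch, in a modern programming notation, traces a request through planning and data management. The record types, field names, and command strings are illustrative assumptions of ours, not part of des Jardains' proposal.

    from dataclasses import dataclass, field

    @dataclass
    class UserRequest:
        user: str
        observation: str
        priority: int            # lower number = more urgent (assumed convention)
        fulfilled: bool = False

    @dataclass
    class TimelineEntry:
        request: UserRequest
        start: float             # planned start time
        commands: list = field(default_factory=list)

    def plan_timeline(requests, craft_state):
        """Mission planning: order requests by priority and map each onto an
        as-planned activity timeline, given the craft's current state."""
        timeline, t = [], craft_state["clock"]
        for req in sorted(requests, key=lambda r: r.priority):
            timeline.append(TimelineEntry(req, start=t,
                                          commands=[f"POINT {req.observation}", "RECORD"]))
            t += 1.0
        return timeline

    def manage_data(as_performed, raw_data):
        """Data management: correlate returned data with the as-performed
        timeline and the original requests, and flag what is still unmet."""
        unmet = []
        for entry in as_performed:
            if entry.request.observation in raw_data:
                entry.request.fulfilled = True
            else:
                unmet.append(entry.request)   # ombudsman role: re-queue with planning
        return unmet

    requests = [UserRequest("atmospheres team", "limb_scan_7", priority=1)]
    timeline = plan_timeline(requests, craft_state={"clock": 0.0})
    print(manage_data(timeline, raw_data={}))  # limb_scan_7 missing, so it is re-queued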

Des Jardains' proposal is essentially (1) to automate the mission-independent aspects of this data collection and delivery cycle (and its implicit sequencing) and (2) to provide a uniform facility for embedding mission-specific systems in the basic support system. Since the sources of information with which the system must deal are both technically and geographically diverse, the proposal takes the form of a computer network which des Jardains calls the payload operations computing cycle (POCC) net.

As des Jardains correctly points out, such a POCC net would solve a number of NASA's current problems relating to mission cost, turnaround time, and efficiency. Currently, in the absence of a uniform, reusable facility, each mission must develop its own special purpose systems which are not reliable and which often just barely work. Users often must suffer lengthy delays, and must pay individually for computer time that relates more to NASA mission management than to the science the user derives. Original mission support teams break up and leave, taking with them much of the esoteric mission specific knowledge, making it difficult to train new staff to support the mission for the duration of its lifetime. In short, time, money, and effort are wasted by not having a central facility that serves as a large, automated backdrop of uniform computing resources useful to any specific mission.

AI Techniques: Mission Monitoring. The volumes of parametric data sent back from a craft are sampled, abstracted, and formatted for meaningful interface with human controllers. The role of a controller is to apply a knowledge of the mission's goals, the craft's capabilities and physics, and the current phase of the mission in interpreting the data he sees. Our impression has been that this aspect of mission operations remains essentially unautomated, except possibly for continually improving sampling techniques, display technologies, and human interfaces. Our message to NASA is that this is an ideal area for increased automation using AI.

The key to automating this aspect of missions operations lies in the areas of symbolic modeling and representation, two of the pivotal areas of AI. Presently, the human's presence in the monitoring loop is required simply to make the connection between what his symbolic model of the mission and craft says should be happening at any moment and what is actually happening. In this role, the human monitor draws primarily upon his knowledge of cause-effect relationships, some of which are specific to the craft and others of a more generic nature. Because of what he knows about the current phase of the mission, he is able to compare the incoming parametric data with the expected conditions supplied by his model. When anomalies arise, he can not only recognize them, but also use them in combination with his symbolic model to hypothesize the nature of the fault. He could then issue further diagnostic commands to the craft, commands that would remedy the fault, or commands to reconfigure and bypass it.

Such symbolic modeling, including representation of the craft, representation of cause-effect principles, symbolic simulation, and fault modeling and diagnosis, covers favorite AI topics. Much of the best AI research has been carried out in these areas, and the Study Group feels that parts of this science are ready for transfer into NASA.
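As an indication of how such a model could take over part of the monitoring loop, the following sketch compares one frame of parametric data against the expected conditions for the current mission phase and uses simple cause-effect rules to hypothesize faults. The telemetry channels, limits, and candidate faults are invented for illustration.

    # The monitor's symbolic model: expected ranges per mission phase.
    EXPECTED = {
        "cruise": {"bus_voltage": (26.0, 30.0), "tank_pressure": (180.0, 220.0)},
    }

    # Cause-effect rules: an anomalous channel suggests candidate faults.
    CAUSES = {
        "bus_voltage": ["degraded solar panel", "battery cell failure"],
        "tank_pressure": ["propellant leak", "pressure transducer fault"],
    }

    def monitor(phase, telemetry):
        """Return (anomalies, fault hypotheses) for one frame of telemetry."""
        anomalies, hypotheses = [], []
        for channel, value in telemetry.items():
            low, high = EXPECTED[phase][channel]
            if not (low <= value <= high):
                anomalies.append((channel, value))
                hypotheses.extend(CAUSES[channel])
        return anomalies, hypotheses

    print(monitor("cruise", {"bus_voltage": 23.5, "tank_pressure": 200.0}))
    # ([('bus_voltage', 23.5)], ['degraded solar panel', 'battery cell failure'])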

AI Techniques: Sequencing and Control. The Study Group heard reports of the agonizingly slow methods of controlling Viking. The process of conceiving, coding, verifying, and transmitting commands to the arm and science packages aboard Viking apparently took times measured in weeks, even for relatively modest operations. The Study Group appreciated the uniqueness of the first missions, and concurred that the procedures used were essential, given the importance and novelty of the Viking missions. However, as NASA proceeds with increasingly complex scientific missions, the increasing autonomy of craft will demand far more automated sequence control regimes, both on the ground and on the craft. This appears to be another topic closely fitting current AI work.

The sequencing task appears to progress as follows. A committee of scientists convenes and decides on some immediate science goals. These are then roughly mapped onto craft capabilities, with some preliminary consideration that the goals are feasible, consistent with one another, and so forth. A team of experts is given the general goals and then produces a general sequencing plan. The general plan is progressively mapped down to the individual command level, resulting in a sequence of primitive steps to be sent to the craft. Before it is sent, however, the sequence must be verified both manually and by computer simulation (a) to meet the science goals and (b) to preserve craft integrity in all respects (electrical, mechanical, thermal, logical). After the sequence has been scrutinized, it is sent a step at a time, with very careful attention to feedback from the craft to ensure successful completion of each step before proceeding to the next. In a mission with the relatively simple arm and TV facilities of Viking, the bottlenecks seem to be the code sequence verification step and the feedback loop in which the sequence is administered to the craft. The conception of plans and their mapping onto craft capabilities do not appear to be bottlenecks. However, in a more complex mission all phases of sequencing will become bottlenecks if attempted using the same level of control techniques found in Viking.
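As an indication of how the verification step could be automated, the following sketch simulates a command sequence against a crude symbolic model of the craft's power budget and reports any step at which an integrity constraint would be violated. The command names, power figures, and limit are assumptions made for illustration; a real verifier would check mechanical, thermal, and logical constraints as well, the point being only that the checks can be expressed against a symbolic model rather than performed by hand.

    POWER_DRAW = {"ARM_EXTEND": 40, "CAMERA_ON": 25, "HEATER_ON": 60, "IDLE": 5}
    POWER_LIMIT = 100   # watts available from the bus (assumed figure)

    def verify_sequence(sequence):
        """Step through the sequence, tracking which loads are active, and
        return every step at which the power budget would be exceeded."""
        violations, active = [], set()
        for step, command in enumerate(sequence):
            if command.endswith("_ON") or command.endswith("_EXTEND"):
                active.add(command)
            elif command.endswith("_OFF"):
                active.discard(command.replace("_OFF", "_ON"))
            load = sum(POWER_DRAW.get(c, 0) for c in active) + POWER_DRAW["IDLE"]
            if load > POWER_LIMIT:
                violations.append((step, command, load))
        return violations

    print(verify_sequence(["CAMERA_ON", "ARM_EXTEND", "HEATER_ON"]))
    # [(2, 'HEATER_ON', 130)] -- the third step exceeds the power budget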

One of the larger areas of AI, problem solving, is directly relevant to all phases of mission sequencing. This is the study of the logical structure of plans, and their automatic generation for complex sequencing tasks. The Study Group is again unanimous in its opinion that AI problem solving theory is largely ready for use by NASA in complex sequencing environments, both ground-based and on semi-autonomous craft. Putting more sequencing intelligence on the craft becomes increasingly attractive as ground-craft distances increase and effective communication bandwidth decreases.
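To indicate what is meant by the automatic generation of plans, the following sketch shows a minimal STRIPS-style problem solver searching over symbolic states for an operator sequence that achieves a stated goal. The toy operators and facts are our own invention and do not describe any actual spacecraft; scaled up with hierarchical operators and constraint checks, the same idea underlies the plan generation referred to here.

    from collections import deque

    # Each operator: (name, preconditions, facts added, facts deleted).
    OPERATORS = [
        ("unlock_arm",  {"arm_locked"},   {"arm_unlocked"}, {"arm_locked"}),
        ("extend_arm",  {"arm_unlocked"}, {"arm_extended"}, set()),
        ("open_hopper", {"power_on"},     {"hopper_open"},  set()),
    ]

    def plan(initial, goal):
        """Breadth-first search over symbolic states for a sequence of
        operators that makes every goal fact true."""
        frontier = deque([(frozenset(initial), [])])
        visited = {frozenset(initial)}
        while frontier:
            state, steps = frontier.popleft()
            if goal <= state:
                return steps
            for name, pre, add, delete in OPERATORS:
                if pre <= state:
                    successor = frozenset((state - delete) | add)
                    if successor not in visited:
                        visited.add(successor)
                        frontier.append((successor, steps + [name]))
        return None

    print(plan({"arm_locked", "power_on"}, {"arm_extended", "hopper_open"}))
    # ['unlock_arm', 'extend_arm', 'open_hopper']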

The scenario of a semi-autonomous craft with onboard problem solving intelligence and a symbolic model of its own capabilities might go as follows. Scientists decide that a sample of reddish material spotted about 15 meters away should be analyzed by science package 21. Using graphics techniques, they draw an outline around the sample on the TV image. Using this outline to identify the object of interest, the onboard vision system converts the image data to coordinate data in its local coordinate frame. The vision system issues the goal of causing a piece of the sample located at those coordinates to be transported to the input hopper of science package 21, located at another known position. The navigation problem solver then generates a course, moves the craft to within arm's distance of the sample, reaches, grasps, then verifies visually and by tactile feedback that a red mass exists in its grasper. It then plans an arm trajectory to package 21's input hopper, noting that the flap of package 13 is up and must be avoided. After moving the sample to the hopper and ungrasping, it visually verifies that a red mass exists in the hopper and no longer exists in the grasper. It turns on package 21, and reports back to ground.

Everything in this scenario is within the means of current or foreseeable AI problem solving, manipulator, vision, and navigation technology. Its primary feature is that, because of a self-model and knowledge of problem solving strategies, the craft can do more science with less ground-based support in a given period of time. Furthermore, the advantages of such technology on any particular mission are minuscule when compared to the advantages NASA will derive from the underlying technology. Again, just as des Jardains has pointed out for the lower level aspects of mission operations, what NASA sorely needs is a mission-independent repertoire of basic problem solving packages which can be molded around the automatic sequencing needs of each mission in a uniform way.

3.3 Recommendations

Up to this point, NASA has concentrated on those activities that, in a primary sense, result in successful missions. That is, NASA designs and builds the equipment required for space-related science. This includes ground-based control equipment and procedures, as well as the spacecraft and its support systems. The Study Group strongly feels it is essential that NASA begin to look at some meta-issues of how to codify the knowledge it uses in primary development. AI research has shown that codification of the knowledge underlying the primary advances in a field can lead to a better understanding of the basic issues of the field. In NASA's case, the immediate and long-term payoff from codification of existing knowledge about mission operations would be increased automaticity, as the primary technologies underlying mission operations could then be handed over to the computer. As the computer assumes progressively more of the intelligent control functions, more ambitious missions become possible, each mission becomes cheaper, and the scientific community can be put in closer touch with the onboard science.

The Study Group's message to NASA is, therefore, that NASA is becoming more and more an information utility and less and less a space hardware enterprise. Because of this, NASA needs to begin new mission-independent programs for managing information during a mission. The first step toward creating a metalevel (information-based, rather than hardware-based) technology within NASA is the development of a unified Mission Control Center, with the goal of increasing the mechanization and standardization of sequencing, data handling and delivery, and related protocols at the low levels of the system, and increasing the automaticity of the center at the higher levels by introduction of existing AI problem solving and symbolic modeling techniques.

To begin the development of such a reusable, modular, intelligent Mission Control Center, the Study Group makes the following recommendations.

1. That NASA look seriously at des Jardains' proposal and establish a mission-independent fund for supporting the development of a system such as des Jardains proposes.

2. That NASA create a special internal, cross-mission division whose primary charge is to interact with the AI community on issues of increased automaticity, using AI techniques, throughout NASA mission operations. The division would serve as a membrane through which theoretical AI and advanced computer science could flow into NASA to meet practical mission operations needs. The division would eventually become a mission-independent resource from which the mission planners for individual missions could draw advanced control techniques for their specific goals.

3. That NASA charge the new division with constructing symbolic models of mission operation, and applying those models in the organization of an intelligent software library for use in specific missions. This library would provide basic AI technological support for automating various aspects of specific missions. It would serve much the same function as a machine shop now serves; but rather than producing new experimental hardware, it would draw upon advanced AI and computer science to provide mission-specific software tools, ranging from symbolic models of a spacecraft to models of the scientific uses of information derived from the craft.

4. That NASA adopt and support one of the advanced AI programming languages (and related research machinery) for use by the AI division in its role as a NASA-wide advanced technique resource and information facility.

4. Spacecraft Computer Technology

The intent of this section is to discuss computer requirements for onboard spacecraft operations in future NASA missions. Space missions have special computer needs that do not arise in ground use of computers. The special needs and requirements that will be imposed on computers to meet the scientific missions of exploratory space flights are discussed in the areas of fault tolerance, large scale integrated circuits, space qualification of computers, computer architectures, and research needed for space computers. Recommendations of actions to be taken by NASA are specified for each of these areas.

4.1 Technological Need

Computers in outer space face severe architectural constraints that do not exist for ground-based computer operation. Because of this, special considerations must be given to space computers, considerations that do not necessarily follow from ground experience. The aspects that require special attention are discussed below.


Scientific Needs

The scientific needs for space mission computers may vary greatly. Once a mission is approved and the science objectives are specified, it is necessary to analyze each scientific experiment to determine its needs for computation. Because of the development of microcomputer technology it is not unreasonable to place a small microcomputer into a scientific device to provide it with more intelligence. Hence, there will be a need for microprocessors.

To support devices which will be used to explore a celestial body, and which will exhibit "intelligent" behavior, large-scale computers will be necessary; that is, large, fast primary memory and backup storage devices will be required. Processing pictures and developing detailed plans that permit robotic devices to operate in space so as to accomplish mission objectives, given only general guidance from the ground, will be necessary. Large amounts of memory and processing time are required to run real-time programs for robotics and machine intelligence.

4.2 State of the Art: Architectural Alternatives for Space Operations

The use of computers for space missions has been evolving since the start of the space age. First-generation space missions essentially had no computers. Second-generation missions had centralized computers that performed all computations required by the mission. Third-generation computers are now being considered. Three different computer architectures can be considered for space operations: distributed microcomputers, centralized processor, and distributed networks of computers. Some of the advantages and disadvantages of each approach will be explored below.

Distributed Microcomputers. If one is to have many small devices with their own built-in intelligence via a microprocessor, then a distributed microcomputer configuration is highly desirable. Such a concept has many advantages from both a technological view and a management view. A distributed network should permit any microprocessor qualified for space to be interconnected to the system. The interfaces between modules should be simple, since the devices are relatively independent of one another. Hence, errors can be isolated to individual devices, and design problems are simplified. A simple executive routine could be developed to control the devices.
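The following sketch suggests what such a simple executive routine might look like: each instrument sits behind its own microprocessor, and the executive merely routes commands and polls status, so a fault in one device stays isolated to that device. The instrument names and message format are assumptions made for illustration.

    class InstrumentMicro:
        """Stands in for one experiment's dedicated microprocessor."""
        def __init__(self, name):
            self.name, self.healthy, self.inbox = name, True, []

        def step(self):
            # Each device handles its own command queue independently.
            command = self.inbox.pop(0) if self.inbox else None
            return {"device": self.name,
                    "status": "OK" if self.healthy else "FAULT",
                    "executed": command}

    class Executive:
        """Controls the devices without knowing their internals."""
        def __init__(self, devices):
            self.devices = {d.name: d for d in devices}

        def send(self, name, command):
            self.devices[name].inbox.append(command)

        def poll(self):
            return [d.step() for d in self.devices.values()]

    net = Executive([InstrumentMicro("magnetometer"), InstrumentMicro("spectrometer")])
    net.send("spectrometer", "TAKE_READING")
    print(net.poll())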

There are some virtues to a distributed microcomputer approach:

1. Changes in software sent from the ground to enhance a device need not pass through extensive reviews, as the change affects only one experiment. Hence, coordination between experimenters and the various software will not, in general, be necessary.

2. Software needs to be developed primarily for small problems. The code will be short, and in most instances, will be written by one programmer. Hence, software can be verified and tested more readily than can large, complex software.

Some disadvantages of a distributed approach are:

1. Space, weight, and computer memory requirements may be larger than those for a centralized approach, since memory and logic are not shared.

2. "Intelligent devices" that have their own microprocessors cannot obtain more memory than initially planned for the space mission. There may be instances whereby information learned on the ground could cause new software to be developed for the device. However, unless the new software fits into the preplanned memory size, it will not be possible to make the change.

Centralized Processor. In a centralized processor system, all functions relative to "intelligent" devices are placed in one computing system. Devices may time-share the central processor so as to have the same effect of "intelligence" as with a distributed processor system in which the "intelligence" is built into the device with a small microprocessor. A centralized processor would have a dynamic storage allocation routine built into it to account for space required by separate programs.
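The following sketch illustrates the dynamic storage allocation idea: experiments time-share a single processor, and memory is granted from one shared pool only while it is available, so a device's "intelligence" can be enlarged after launch if the pool permits. The pool size and task names are invented for illustration.

    class CentralProcessor:
        def __init__(self, memory_words):
            self.free = memory_words      # shared memory pool
            self.tasks = {}               # task name -> words currently allocated

        def allocate(self, task, words):
            """Dynamic storage allocation: grant memory only if the shared
            pool can cover the request; otherwise refuse it."""
            if words > self.free:
                return False
            self.free -= words
            self.tasks[task] = self.tasks.get(task, 0) + words
            return True

        def release(self, task):
            self.free += self.tasks.pop(task, 0)

    cpu = CentralProcessor(memory_words=64_000)
    print(cpu.allocate("spectrometer_control", 8_000))   # True
    print(cpu.allocate("image_processing", 60_000))      # False: pool exhausted
    cpu.release("spectrometer_control")                  # memory returns to the pool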

Some of the virtues of a centralized processor configuration are:

1. Large, fast memories become available for complex "machine intelligence" tasks such as high-resolution picture processing and the plan formation needed to permit robotic devices to explore terrestrial bodies in space.

2. "Intelligent devices" that time-share the central processor can have their "intelligence" augmented by new software since more core memory should be readily acquired from the dynamic storage allocation routine if needed.

3. Space and weight are saved since only one control logic is required for the single computer, and memory is shared.

Some disadvantages are:

1. The executive routine for the central processor will be complicated, and verification of the executive routine will be more complex than for the distributed processor approach.

2. Changes in software made on the ground to enhance a device may require extensive coordination and testing on the ground before they can be approved and transmitted to the spacecraft.

Distributed Networks of Computers. In a distributed network of computers, tasks to be performed can be assigned to any of the computers in the network. Peripheral devices and memory in each of the processors can be shared. Multiple central processors permit parallel computing to take place. A virtue of such an approach is that if one central processor fails, computation can still continue, since the other processors can be used to perform the work, albeit at a reduced processing speed.
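The failover property can be sketched as follows: tasks are spread over whichever processors are currently healthy, and when one fails the survivors absorb its work at reduced speed. Processor and task names are assumptions made for illustration.

    def assign_tasks(tasks, processors):
        """Round-robin the tasks over the processors currently marked healthy."""
        healthy = [name for name, ok in processors.items() if ok]
        if not healthy:
            raise RuntimeError("no processors available")
        return {task: healthy[i % len(healthy)] for i, task in enumerate(tasks)}

    tasks = ["image_compression", "attitude_control", "telemetry_formatting"]
    processors = {"cpu_a": True, "cpu_b": True, "cpu_c": True}
    print(assign_tasks(tasks, processors))   # work spread across three processors

    processors["cpu_b"] = False              # one processor fails
    print(assign_tasks(tasks, processors))   # the same work shared by the survivors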

Some disadvantages of the approach are:

1. Complex executive routines are required to control and to transfer data between processors.

2. A considerable amount of time may be expended simply managing the configuration rather than performing work in support of the scientific mission of the flight.

Fault Tolerance. Computers sent into space must be robust. They must be able to operate in space even when malfunctions occur. Fault tolerance is an attribute of information processing systems that enables the continuation of expected system behavior after faults occur. Fault tolerance is essential to space missions because it is impossible to adequately test every one of the transistor-like devices on a single chip, and a single computer will contain hundreds of such chips.

Faults fall primarily into two fundamentally distinct classes:

Physical faults caused by adverse natural phenomena, component failures, and external interference originating in the environment.

Man-made faults caused by human errors including imperfections in specifications, design errors, implementation errors, and erroneous man/machine interactions.

Fault tolerance and fault avoidance are complementary approaches to the fault problem. Fault avoidance attempts to attain reliable systems by:


Fault tolerance of physical faults is achieved by employing protective redundancy, which becomes effective when faults occur. Several redundancy techniques are:


Fault masking appears to be a good approach primarily for short missions of several days' duration. Both hardware- and software-controlled recovery systems are required for successful space operations.
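Fault masking is classically realized by majority voting over replicated modules (triple modular redundancy). The following minimal sketch shows the idea; the replicated computation is an arbitrary stand-in.

    from collections import Counter

    def tmr(module, replicas=3):
        """Run the same computation on independent copies of a module and
        return the majority output, masking a single faulty copy."""
        outputs = [module(i) for i in range(replicas)]
        value, votes = Counter(outputs).most_common(1)[0]
        if votes <= replicas // 2:
            raise RuntimeError("no majority: too many simultaneous faults")
        return value

    def sensor_reading(copy_id):
        # Copy 1 is deliberately faulty, to show the masking effect.
        return 42 if copy_id != 1 else 17

    print(tmr(sensor_reading))   # 42: the single faulty copy is outvoted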

Two techniques for realizing fault tolerance of man-made faults are:


LSI Technology. Large scale integrated circuit technology has yielded relatively large processors on small chips. These devices are highly important for space technology. Today's high performance MOS microprocessor has the following features:
