
Increased autonomy of craft will demand far more automated sequence control regimes, both on the ground and on the craft. This appears to be another topic closely fitting current AI work.

The sequencing task appears to progress as follows. A committee of scientists convenes and decides on some immediate science goals. These are then roughly mapped onto craft capabilities, with some preliminary consideration of whether the goals are feasible, consistent with one another, and so forth. A team of experts is given the general goals and produces a general sequencing plan. The general plan is progressively mapped down to the individual command level, resulting in a sequence of primitive steps to be sent to the craft. Before it is sent, however, the sequence must be verified, both manually and by computer simulation, (a) to meet the science goals and (b) to preserve craft integrity in all respects (electrical, mechanical, thermal, logical). After the sequence has been scrutinized, it is sent a step at a time, with very careful attention to feedback from the craft to ensure successful completion of each step before proceeding to the next. In a mission with the relatively simple arm and TV facilities of Viking, the bottlenecks seem to be the code sequence verification step and the feedback loop in which the sequence is administered to the craft. The conception of plans and their mapping onto craft capabilities do not appear to be the bottlenecks. However, in a more complex mission all phases of sequencing will be bottlenecks if attempted using the same level of control techniques found in Viking.
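
As a rough illustration only, the following sketch (in present-day Python; the step names, the power and thermal constraints, and the uplink and acknowledgement interface are all invented for this example) shows the two bottleneck activities described above: simulating a command sequence against simple craft-integrity constraints before uplink, and then administering it one step at a time with confirmation before each next step.

```python
# Hypothetical sketch of ground-side sequence verification and step-by-step uplink.
# Step names, resource limits, and the acknowledgement interface are invented.

from dataclasses import dataclass

@dataclass
class Step:
    name: str
    power_watts: float        # electrical load while the step runs
    heater_off: bool = False  # example thermal interaction

POWER_BUDGET_WATTS = 90.0     # invented craft power budget

def verify_sequence(steps):
    """Simulate the sequence against simple integrity constraints
    before anything is sent to the craft."""
    problems = []
    for i, step in enumerate(steps):
        if step.power_watts > POWER_BUDGET_WATTS:
            problems.append(f"step {i} ({step.name}): exceeds power budget")
        if step.heater_off and step.power_watts > 0.8 * POWER_BUDGET_WATTS:
            problems.append(f"step {i} ({step.name}): thermal risk with heater off")
    return problems

def administer(steps, send, await_ack):
    """Send the verified sequence one step at a time, confirming each
    step completed before the next is transmitted."""
    for step in steps:
        send(step)
        if not await_ack(step):  # feedback loop with the craft
            raise RuntimeError(f"no confirmation for {step.name}; sequencing halted")

if __name__ == "__main__":
    sequence = [Step("unstow arm", 40.0), Step("position scoop", 25.0),
                Step("acquire sample", 60.0, heater_off=True)]
    issues = verify_sequence(sequence)
    if issues:
        print("sequence rejected:", issues)
    else:
        # stand-in uplink and acknowledgement functions for illustration
        administer(sequence, send=lambda s: print("uplink:", s.name),
                   await_ack=lambda s: True)
```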

One of the larger areas of AI, problem solving, is directly relevant to all phases of mission sequencing. This is the study of the logical structure of plans, and their automatic generation for complex sequencing tasks. The Study Group is again unanimous in its opinion that AI problem solving theory is largely ready for use by NASA in complex sequencing environments, both ground-based and on semi-autonomous craft. Putting more sequencing intelligence on the craft becomes increasingly attractive as ground-craft distances increase and effective communication bandwidth decreases.

The scenario of a semi-autonomous craft with onboard problem-solving intelligence and a symbolic model of its own capabilities might go as follows. Scientists decide that a sample of reddish material spotted about 15 meters away should be analyzed by science package 21. Using graphics techniques, they draw an outline around the sample on the TV image. Using this outline to identify the object of interest, the onboard vision system converts the image data to coordinate data in its local coordinate frame. The vision system issues the goal of causing a piece of the sample located at the coordinate to be transported to the input hopper of science package 21, located at another known position. The navigation problem solver then generates a course, moves the craft to within arm's distance of the sample, reaches, grasps, then verifies visually and by tactile feedback that a red mass exists in its grasper. It then plans an arm trajectory to package 21's input hopper, noting that the flap of package 13 is up and must be avoided. After moving the sample to the hopper and ungrasping, it visually verifies that a red mass exists in the hopper and no longer exists in the grasper. It turns on package 21, and reports back to ground.
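
A minimal sketch of how such an onboard executive might represent and run this plan follows; it is purely illustrative, written in Python, and all primitives, sensor checks, coordinates, and obstacle names are invented stand-ins rather than any actual flight interface.

```python
# Hypothetical sketch of the onboard goal for the scenario above:
# "a piece of the red sample is in the hopper of science package 21".
# Primitives, sensor checks, and the obstacle list are invented stand-ins.

SAMPLE = {"color": "red", "position": (14.6, 3.2)}   # local coordinate frame
HOPPER_21 = (0.8, -0.4)
RAISED_FLAPS = ["package 13 flap"]                    # obstacle to be avoided

def navigate_within_arm_reach(target):
    print("navigation: course planned, craft moved near", target)
    return True

def grasp(target):
    print("manipulation: reached and grasped material at", target)
    return True

def grasper_holds_red_mass():
    # visual plus tactile verification, stubbed out here
    return True

def move_arm(dest, avoid):
    print(f"manipulation: arm trajectory to {dest}, avoiding {avoid}")
    return True

def hopper_contains_red_mass():
    return True

plan = [
    (lambda: navigate_within_arm_reach(SAMPLE["position"]), "approach sample"),
    (lambda: grasp(SAMPLE["position"]), "grasp sample"),
    (grasper_holds_red_mass, "verify red mass in grasper"),
    (lambda: move_arm(HOPPER_21, RAISED_FLAPS), "move sample to hopper 21"),
    (hopper_contains_red_mass, "verify red mass in hopper"),
]

for step, description in plan:
    if not step():
        print("verification failed; replanning needed at:", description)
        break
else:
    print("turning on package 21; reporting back to ground")
```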

Everything in this scenario is within the means of current or foreseeable AI problem solving, manipulator, vision, and navigation technology. Its primary feature is that, because of a self-model and knowledge of problem solving strategies, the craft can do more science with less ground-based support in a given period of time. Furthermore, the advantages of such technology on any particular mission are minuscule when compared to the advantages NASA will derive from the underlying technology. Again, just as des Jardains has pointed out for the lower level aspects of mission operations, what NASA sorely needs is a mission-independent repertoire of basic problem solving packages which can be molded around the automatic sequencing needs of each mission in a uniform way.

3.3 Recommendations

Up to this point, NASA has concentrated on those activities that, in a primary sense, result in successful missions. That is, NASA designs and builds the equipment required for space-related science. This includes ground-based control equipment and procedures, as well as the spacecraft and its support systems. The Study Group strongly feels it is essential that NASA begin to look at some meta-issues of how to codify the knowledge it uses in primary development. AI research has shown that codification of the knowledge underlying the primary advances in a field can lead to a better understanding of the basic issues of the field. In NASA's case, the immediate and long-term payoff from codifying existing knowledge about mission operations would be increased automaticity, as the primary technologies underlying mission operations can then be handed over to the computer. As the computer assumes progressively more of the intelligent control functions, more ambitious missions become possible, each mission becomes cheaper, and the scientific community can be put in closer touch with the onboard science.

The Study Group's message to NASA is, therefore, that NASA is becoming more and more an information utility and less and less a space hardware enterprise. Because of this, NASA needs to begin new mission-independent programs for managing information during a mission. The first step toward creating a metalevel (information-based, rather than hardware-based) technology within NASA is the development of a unified Mission Control Center, with the goal of increasing the mechanization and standardization of sequencing, data handling and delivery, and related protocols at the low levels of the system, and increasing the automaticity of the center at the higher levels by introduction of existing AI problem solving and symbolic modeling techniques.

To begin the development of such a reusable, modular, intelligent Mission Control Center, the Study Group makes the following recommendations.

1. That NASA look seriously at des Jardains' proposal and establish a mission-independent fund for supporting the development of a system such as des Jardains proposes.

2. That NASA create a special internal, cross-mission division whose primary charge is to interact with the AI community on issues of increased automaticity, using AI techniques, throughout NASA mission operations. The division would serve as a membrane through which theoretical AI and advanced computer science could flow into NASA to meet practical mission operations needs. The division would eventually become a mission-independent resource from which the mission planners for individual missions could draw advanced control techniques for their specific goals.

3. That NASA charge the new division with constructing symbolic models of mission operation, and applying those models in the organization of an intelligent software library for use in specific missions. This library would provide basic AI technological support for automating various aspects of specific missions. It would serve much the same function as a machine shop now serves; but rather than new experimental hardware, it would draw upon advanced AI and computer science to provide mission-specific software tools, ranging from symbolic models of a spacecraft to models of the scientific uses of information derived from the craft.

4. That NASA adopt and support one of the advanced AI programming languages (and related research machinery) for use by the AI division in its role as a NASA-wide advanced technique resource and information facility.

4. Spacecraft Computer Technology

The intent of this section is to discuss computer requirements for onboard spacecraft operations in future NASA missions. Space missions have special computer needs that do not pertain in ground use of computers. The special needs and requirements that will be imposed on computers to meet the scientific missions of exploratory space flights are discussed in the areas of fault tolerance, large scale integrated circuits, space qualification of computers, computer architectures, and research needed for space computers. Recommendations of actions to be taken by NASA are specified for each of these areas.

4.1 Technological Need

Computers in outer space face severe architectural constraints that do not exist for ground-based computer operation. Because of this, space computers require special considerations that do not necessarily generalize from ground experience. The aspects that require special attention are discussed below.

1. Power and weight constraints are important for space missions. Fortunately, work in large scale integrated (LSI) technology has played a major role in decreasing power and weight requirements for computers.

2. Hostile space environmental conditions require that the computer be shielded from radiation, extreme temperatures, mechanical stress, and other space conditions.

Operational Requirements


Scientific Needs

The scientific needs for space mission computers may vary greatly. Once a mission is approved and the science objectives are specified, it is necessary to analyze each scientific experiment to determine its needs for computation. Because of the development of microcomputer technology it is not unreasonable to place a small microcomputer into a scientific device to provide it with more intelligence. Hence, there will be a need for microprocessors.

To support devices which will be used to explore a celestial body, and which will exhibit "intelligent" behavior, large-scale computers will be necessary; that is, large, fast primary memory and backup storage devices will be required. Processing pictures, and developing detailed plans that permit robotic devices to operate in space so as to accomplish mission objectives given general guidance from the ground, will be necessary. Large amounts of storage and processing time are required to run real-time programs for robotics and machine intelligence.

4.2 State of the Art: Architectural Alternatives for Space Operations

The use of computers for space missions has been evolving since the start of the space age. First-generation space missions essentially had no computers. Second-generation missions had centralized computers that performed all computations required by the mission. Third-generation computers are now being considered. Three different computer architectures can be considered for space operations: distributed microcomputers, centralized processor, and distributed networks of computers. Some of the advantages and disadvantages of each approach will be explored below.

Distributed Microcomputers. If one is to have many small devices with their own built-in intelligence via a microprocessor, then a distributed microcomputer configuration is highly desirable. Such a concept has many advantages from both a technological and a management view. A distributed network should permit any microprocessor qualified for space to be interconnected to the system. The interface between modules should be simple, as the devices should be relatively independent of one another. Hence, errors can be isolated to individual devices, and design problems are simplified. A simple executive routine could be developed to control the devices.
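
The following sketch, in Python and entirely hypothetical (the device names, the narrow command interface, and the status reports are invented), illustrates the kind of simple executive routine described above: each instrument sits behind its own processor, and the executive only issues commands and collects status, so a fault stays isolated to one device.

```python
# Hypothetical sketch of a simple executive over distributed device microcomputers.
# Device names and the command interface are invented for illustration.

class Device:
    """Stand-in for an instrument with its own microprocessor."""
    def __init__(self, name):
        self.name = name

    def command(self, cmd):
        # in reality this would cross a simple, narrow hardware interface
        return {"device": self.name, "cmd": cmd, "status": "ok"}

class Executive:
    """Minimal controller: no shared memory or logic, so a fault in one
    device is isolated to that device."""
    def __init__(self, devices):
        self.devices = devices

    def poll(self):
        reports = []
        for dev in self.devices:
            try:
                reports.append(dev.command("report_status"))
            except Exception as err:  # fault stays confined to this device
                reports.append({"device": dev.name, "status": f"fault: {err}"})
        return reports

if __name__ == "__main__":
    exec_routine = Executive([Device("spectrometer"), Device("camera"),
                              Device("weather package")])
    for report in exec_routine.poll():
        print(report)
```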

There are some virtues to a distributed microcomputer approach:

1. Changes in software sent from the ground to enhance a device need not pass through extensive reviews, as the change affects only one experiment. Hence, coordination between experimenters and the various software will not, in general, be necessary.

2. Software needs to be developed primarily for small problems. The code will be short, and in most instances, will be written by one programmer. Hence, software can be verified and tested more readily than can large, complex software.

Some disadvantages of a distributed approach are:

1. Space, weight, and computer memory requirements may be larger than those for a centralized approach, since memory and logic are not shared.

2. "Intelligent devices" that have their own microprocessors cannot obtain more memory than initially planned for the space mission. There may be instances whereby information learned on the ground could cause new software to be developed for the device. However, unless the new software fits into the preplanned memory size, it will not be possible to make the change.

Centralized Processor. In a centralized processor system, all functions relative to "intelligent" devices are placed in one computing system. Devices may time-share the central processor so as to have the same effect of "intelligence" as with a distributed processor system in which the "intelligence" is built into the device with a small microprocessor. A centralized processor would have a dynamic storage allocation routine built into it to account for space required by separate programs.
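
As a hypothetical illustration of the time-sharing and dynamic storage allocation just described (written in Python; the memory pool size, task names, and allocation requests are invented), the sketch below shows instrument tasks sharing one processor and drawing extra memory from a common pool when new software needs it.

```python
# Hypothetical sketch of a centralized configuration: instrument tasks time-share
# one processor and draw working memory from a shared pool, so a device's
# "intelligence" can be enlarged after launch if the pool has room.
# Pool size, task names, and requests are invented.

MEMORY_POOL_WORDS = 64_000

class Allocator:
    """Very small dynamic storage allocation routine over a shared pool."""
    def __init__(self, total):
        self.free = total
        self.held = {}

    def request(self, task, words):
        if words > self.free:
            return False                 # request denied; task keeps its old size
        self.free -= words
        self.held[task] = self.held.get(task, 0) + words
        return True

def round_robin(tasks, allocator, cycles=1):
    """Time-share the single processor among instrument tasks."""
    for _ in range(cycles):
        for name, step in tasks:
            step(allocator)

if __name__ == "__main__":
    alloc = Allocator(MEMORY_POOL_WORDS)
    tasks = [
        ("imaging", lambda a: print("imaging granted 20k:", a.request("imaging", 20_000))),
        ("sampler", lambda a: print("sampler granted 8k:", a.request("sampler", 8_000))),
    ]
    round_robin(tasks, alloc)
    # later, new software uplinked for the sampler asks for more memory
    print("sampler growth granted:", alloc.request("sampler", 30_000))
    print("free words remaining:", alloc.free)
```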

Some of the virtues of a centralized processor configuration are:

1. Large, fast memories become available for complex "machine intelligence" tasks such as high-resolution picture processing and the plan formation needed to permit robotic devices to explore terrestrial bodies in space.

2. "Intelligent devices" that time-share the central processor can have their "intelligence" augmented by new software since more core memory should be readily acquired from the dynamic storage allocation routine if needed.

3. Space and weight are saved, since only one control logic is required for the single computer, and memory is shared.

Some disadvantages are:

1. The executive routine for the central processor will be complicated, and verification of the executive routine will be more complex than for the distributed processor approach.

2. Changes in software made on the ground to enhance a device may require extensive coordination and testing on the ground before they can be approved and transmitted to the spacecraft.

Distributed Networks of Computers. In a distributed network of computers, tasks to be performed can be assigned to any of the computers in the network. Peripheral devices and memory in each of the processors can be shared. Having many processors also permits parallel computing to take place. A virtue of such an approach is that if one central processor fails, computation can still continue, since other processors can be used to perform the work, albeit at a reduced processing speed.
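
A minimal, hypothetical sketch of this failover behavior follows (in Python; the processor names, task names, and round-robin assignment policy are invented): tasks are spread across the healthy processors, and when one fails its work is redistributed to the survivors at reduced overall speed.

```python
# Hypothetical sketch of task assignment in a distributed network of computers.
# Processor and task names, and the assignment policy, are invented.

def assign(tasks, processors):
    """Round-robin tasks over the currently healthy processors."""
    schedule = {p: [] for p in processors}
    for i, task in enumerate(tasks):
        schedule[processors[i % len(processors)]].append(task)
    return schedule

tasks = ["image compression", "attitude control", "plan update", "telemetry"]
processors = ["cpu-A", "cpu-B", "cpu-C"]

print("nominal assignment:", assign(tasks, processors))

# cpu-B fails; the executive reassigns its work to the remaining processors,
# so computation continues at reduced processing speed
processors.remove("cpu-B")
print("after failure:", assign(tasks, processors))
```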

Some disadvantages of the approach are:

1. Complex executive routines are required to control and to transfer data between processors.

2. A considerable amount of time may be expended simply in managing the configuration rather than in performing work in support of the scientific mission of the flight.

Fault Tolerance. Computers sent into space must be robust. They must be able to operate in space even when malfunctions occur. Fault tolerance is an attribute of information processing systems that enables the continuation of expected system behavior after faults occur. Fault tolerance is essential to space missions, as it is impossible to test adequately the many transistor-like devices on a single chip, and a single computer would contain hundreds of such chips.

Faults fall primarily into two fundamentally distinct classes:

Physical faults caused by adverse natural phenomena, component failures, and external interference originating in the environment.

Man-made faults caused by human errors including imperfections in specifications, design errors, implementation errors, and erroneous man/machine interactions.

Fault tolerance and fault avoidance are complementary approaches to the fault problem. Fault avoidance attempts to attain reliable systems by:


Use of thoroughly refined techniques for the interconnections of components and assembly of subsystems.

Packaging and shielding of the hardware to screen out expected forms of external interference.

Carrying out of extensive testing of the complete system prior to its use.

Fault tolerance of physical faults employs protective redundancy, which becomes effective when faults occur. Several redundancy techniques are:

Fault masking to assure that the effect of a fault is isolated to a single module.

Fault detection to detect that an error has occurred so that a recovery algorithm may be initiated.

Fault recovery to correct a detected fault. Automatic recovery algorithms are essential for space flights since human intervention will not be possible.

Fault masking appears to be a good approach primarily for short missions of several days' duration. Both hardware- and software-controlled recovery systems are required for successful space operations.
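
As a purely illustrative sketch of fault masking by protective redundancy (in Python; the triplicated computation and the voter are invented and stand in for what would be hardware or onboard software), the fragment below shows three copies of a module voting so that a single fault is masked, with the disagreement serving as the detection point at which an automatic recovery algorithm would be initiated.

```python
# Hypothetical sketch of fault masking by triple modular redundancy:
# three copies of a module compute the same value, and a majority vote
# masks a single fault. The disagreement check is where an onboard
# recovery algorithm would start, since human intervention is not possible.

from collections import Counter

def vote(results):
    """Return the majority value and whether any copy disagreed."""
    tally = Counter(results)
    value, count = tally.most_common(1)[0]
    fault_detected = count < len(results)
    return value, fault_detected

# three redundant copies of the same computation; one has a simulated fault
copies = [lambda: 42, lambda: 42, lambda: 17]

value, fault = vote([copy() for copy in copies])
print("masked output:", value)
if fault:
    print("fault detected in one module; recovery algorithm would be initiated")
```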

Two techniques for realizing fault tolerance of man-made faults are:


LSI Technology. Large scale integrated circuit technology has yielded relatively large processors on small chips. These devices are highly important for space technology. Today's high performance MOS microprocessor has the following features:


It is not clear, however, that such a fast device could be space certified in the near future.

Future high performance MOS microprocessors are likely to have the following features:


In addition, such processors are likely to have a large logical address space, multiprocessing capability, a language orientation, and a firmware operating system.

Space Qualified Computers. Space-qualified computers appear to lag significantly behind ground-based computers in both speed and memory capacity. Specifications for a fault-tolerant space computer (FTSC) under development at the Raytheon Corporation are as follows:


The system is expected to be triply redundant, with all modules on single chips.

4.3 Recommendations

Digital computers onboard spacecraft have been playing an ever-increasing role in NASA space missions. They are destined to play a dominant role in future space missions. The miniaturization of computers that has revolutionized computing on Earth provides even greater opportunities for space missions. Miniaturized computers will permit NASA to develop "intelligent" sensors and devices which allow information, rather than raw data, to be acquired in space and sent to Earth. Computers of significant size can be developed which will permit robotic devices to be built and controlled using general plans developed on Earth. Such devices will permit the terrestrial exploration of remote bodies that cannot be explored by man.

Fault Tolerance and Hardware. Whereas the development of smaller, more powerful computers on chips will progress without support from NASA, these developments will not meet NASA's needs for spacecraft. Ground computers do not require absolute fault tolerance; because chips are relatively inexpensive, they can simply be replaced on the ground. This, however, is not possible onboard spacecraft, where fault tolerance is crucial to the success of a mission. Fault-tolerant hardware systems need to be supported both by NASA and by the Department of Defense, which is also concerned with computers onboard spacecraft. If funding were coordinated, it could benefit both organizations. Fault-tolerance efforts must proceed at two levels, addressing both hardware and software. At the current time, a major problem exists with respect to large scale integrated circuit technology. Because of their complexity, chips cannot now be tested adequately. Random logic chips (e.g., the INTEL 8080) may have failure rates that are unacceptable for space use; the random logic makes it extremely difficult to test the chips adequately.


A hierarchic, or top-down, approach to designing chips, rather than random design methods, could increase chip reliability and permit easier testing. NASA should support efforts in hierarchic design, or other design techniques, which will improve chip reliability and ease of testing. Until major developments are made by manufacturers in improving the reliability and testing of chips, NASA should plan to test its own wafers thoroughly before qualifying them for space. Testing performed by manufacturers on wafers has been, at best, poor. Planning for fault-tolerant hardware must start at the inception of a space mission and must be part of the mission management plan.

Fault Tolerance and Software. Fault tolerance is needed not only for hardware, but also for software. Because of a trivial software error, an entire space mission costing billions of dollars can be lost. By having intelligent devices with their own hardware and software, small programs that are relatively easy to code, verify, and test can be developed. However, one cannot always guarantee small programs. Hence, a fault-tolerant software effort must be initiated at the inception of a mission and must be an integral part of the management plan. Software recovery procedures and algorithms to handle single
