
It is not clear, however, that such a fast device could be space certified in the near future.

Future high-performance MOS microprocessors are likely to have the following features:

[feature list not reproduced]

In addition, it would have a large logical address space, multiprocessing capability, a language orientation, and a firmware operating system.

Space-Qualified Computers. Space-qualified computers appear to be lagging significantly behind ground-based computers in both speed and memory capacity. Specifications for a fault-tolerant space computer (FTSC) under development at the Raytheon Corporation are as follows:

[specifications table not reproduced]

The system is expected to be triply redundant, where all modules are on single chips.
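As background, triple redundancy means that three copies of each module compute the same result and a majority vote masks any single failure. The following minimal sketch illustrates only the voting step; the names are hypothetical and do not describe the Raytheon FTSC design.

```python
# Illustrative sketch of triple modular redundancy (TMR); all names here
# are hypothetical and do not describe the FTSC implementation.
from collections import Counter

def vote(results):
    """Majority vote over the outputs of three redundant modules.
    A single faulty module is outvoted by the two good ones."""
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one module disagrees")
    return value

# Usage: vote([module_a(x), module_b(x), module_c(x)])  # hypothetical modules
```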

4.3 Recommendations

Digital computers onboard spacecraft have been playing an ever-increasing role in NASA space missions. They are destined to play a dominant role in future space missions. The miniaturization of computers that has revolutionized computing on Earth provides even greater opportunities for space missions. Miniaturized computers will permit NASA to develop "intelligent" sensors and devices which permit information, rather than raw data, to be acquired in space and sent to Earth. Computers of significant size can be developed which will permit robotic devices to be built and controlled using general plans developed on Earth. Such devices will permit the terrestrial exploration of remote bodies that cannot be explored by man.

Fault Tolerance and Hardware. Whereas the development of smaller, more powerful computers on chips will progress without support from NASA, these developments will not meet NASA's needs for spacecraft. Ground computers do not require absolute fault tolerance: because they are relatively inexpensive, chips can be replaced on the ground. This, however, is not possible onboard spacecraft, where fault tolerance is crucial to the success of a mission. Fault-tolerant hardware systems need to be supported both by NASA and by the Department of Defense, which is also concerned with computers onboard spacecraft. If funding were coordinated, it could benefit both organizations. Fault tolerance must proceed at two levels, considering both hardware and software. At the current time, a major problem exists with respect to large-scale integrated circuit technology. Because of their complexity, chips cannot be tested adequately now. Random-logic chips (e.g., the INTEL 8080) may have failure rates that are unacceptable for space use; the random logic makes it extremely difficult to test chips adequately.

A hierarchic, or top-down, approach to designing chips, rather than random design methods, could increase chip reliability and permit easier testing. NASA should support efforts in hierarchic design, or other design techniques which will improve chip reliability and ease of testing. Until manufacturers make major advances in improving the reliability and testing of chips, NASA should plan to test its own wafers thoroughly before qualifying them for space. Testing performed by manufacturers on wafers has been, at best, poor. Planning for fault-tolerant hardware must start at the inception of a space mission and must be part of the mission management plan.

Fault Tolerance and Software. Fault tolerance is needed not only for hardware, but also for software. Because of a trivial software error, an entire space mission costing billions of dollars can be lost. By having intelligent devices with their own hardware and software, small programs, relatively easy to code, verify, and test, can be developed. However, one cannot always guarantee small programs. Hence, a fault-tolerant software effort must be initiated at the inception of a mission and must be an integral part of the management plan. Software recovery procedures and algorithms to handle single and multiple failures are required, and they need considerable research. A systematic effort is needed to develop error detection and recovery algorithms for space computers. Fault-tolerant hardware and software for space computers are still in their infancy and need considerable support from NASA.
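As a purely illustrative sketch (the routine names and fault model below are hypothetical, not drawn from any NASA system), a recovery procedure of the kind called for here might mask a transient single failure by retrying, and revert to a degraded baseline routine when failures persist:

```python
# Hypothetical sketch of software error detection and recovery; the names
# run_primary, run_degraded, and TransientFault are illustrative only.

class TransientFault(Exception):
    """Raised when a computation detects an inconsistent or faulty result."""

def with_recovery(run_primary, run_degraded, max_retries=2):
    """Retry the primary routine to mask a transient (single) failure;
    revert to a degraded baseline routine if failures persist."""
    for _ in range(max_retries + 1):
        try:
            return run_primary()
        except TransientFault:
            continue                 # single, transient failure: retry
    return run_degraded()            # persistent or multiple failures: degrade
```

The essential point is that detection, retry, and reversion become explicit, testable steps rather than ad hoc responses.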

Computer Architecture. There is no one computer architecture uniquely suited to all of NASA's needs. The particular architecture for a specific mission will depend upon the mission objectives. The three architectures discussed in Subsection 4.2 all have advantages and disadvantages. The distributed processor concept and large central processors are useful architectures and should be considered for near-term and future space missions. However, the distributed network of computers requires considerably more research to determine its applicability to space operations. Because much is still not known about the control of distributed networks in ground-based systems, this type of architecture is not realistic for a Mars 1986 flight which would include a robotic device. A distributed processor concept is attractive from a management view of space computing: it provides for separation of functions. It is particularly useful for missions on which "intelligent" devices and sensors have special timing requirements that cannot be fulfilled by a central processor.

Missions that require robotic devices will require large central processors. Because of weight and space limitations, "intelligent" devices should be reviewed carefully on such missions to determine whether their needs could be met by the central processor. If this is possible, then the central processor should be shared to service the device. Trade-off studies will be needed to determine the role of the central processor and the number of "intelligent" devices that will meet the space and weight restrictions. Computers are destined to play essential roles in space missions. They will have a major impact on "intelligent" devices and sensors. Exploration of terrestrial bodies by robots will be possible only with adequate computer hardware and software. NASA must place greater stress and funds into the support of spaceborne computers, fault-tolerant techniques and systems, and software support for future space missions.

5. Computer Systems Technology

This section addresses the use of computer technology throughout NASA. We will review the rapidly evolving state of hardware technology and describe its implications upon the practicality of machine intelligence.

5.1 Introduction

With computer technology so central to the organization's mission, and consuming such a large percentage of its resources, one would expect to find a massive research and development program to advance this technology and thereby further NASA's mission objectives. Yet we have found scant evidence of NASA innovation within this field, and strong indications that it is not even adequately adopting technology developed elsewhere. As an indication of this lack of innovation, though it is certainly not conclusive evidence, at the most recent AIAA Computers in Aerospace Conference (1977), sponsored in part by NASA, only four of the eighty-six papers presented (less than 5%) were by people from NASA Headquarters or the NASA centers.

5.2 State of the Art: Computer Systems

In the workshop deliberations of this Study Group, several trends within NASA have become quite apparent which may seriously limit the potential benefits available from spacecraft-based machine intelligence. It is therefore important to identify these trends, uncover their basic causes, and suggest alternative cures which preserve and enhance the opportunities to utilize machine intelligence. These same trends also exist, though to a lesser extent, for ground-based systems, and hence have broad applicability throughout the agency.

NASA Missions Are Engineered and Preplanned to Minimize Dependence on Autonomous Operations. Because of NASA's no-fail philosophy for missions, an extremely conservative force is applied to mission plans and objectives. All aspects of the mission are carefully thought out in minute detail, and all interactions between components are meticulously accounted for. Besides increasing mission planning costs and lead time, the resulting plans are extremely inflexible and are incapable of having experimental components. As an example of this approach, the Mars rover mission reduced the need for autonomous control to local obstacle avoidance within a 30-meter path. The rest of the control was provided via ground-supplied sequencing produced on an overnight basis. As a result, half of the available picture bandwidth was devoted to pictures for the ground-based path-planning function rather than for science content, and no autonomous capability was provided to photograph and/or analyze targets of opportunity not selected in the ground-based plan. Similarly, in ground-based systems, we found evidence that investigators working in data reduction were not able to use the most advanced technology available because the NASA monitors were not convinced that it was 100% reliable. Instead, a proven, but obsolete, method requiring much more user intervention was chosen because of NASA's excessive conservatism and because the concept of experimental upgrade of baseline capabilities has not been embraced. Clearly what is needed is a new mission planning model which establishes minimum baseline capabilities for mission components, enables use of enhanced versions, and provides protection from component malfunction with automatic reversion to baseline capabilities. While such redundancy is commonplace in hardware, similar software redundancy, especially utilizing enhanced "experimental" versions, is quite novel, but it is technically feasible within a properly constituted operating system. Developing such a capability is part of our recommendations below.
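A minimal sketch of the reversion idea is shown below; the function names are hypothetical, and nothing here describes an actual flight system. An enhanced, possibly experimental, routine is tried first, and any failure causes automatic reversion to the proven baseline routine:

```python
# Illustrative sketch only: an enhanced, possibly experimental, routine is
# tried first, and any failure causes automatic reversion to the proven
# baseline routine, as described in the text.

def guarded(baseline, enhanced=None):
    """Return a callable that prefers the enhanced version but reverts
    to the baseline implementation if the enhanced version raises."""
    def call(*args, **kwargs):
        if enhanced is not None:
            try:
                return enhanced(*args, **kwargs)
            except Exception:
                pass                          # record the failure, then revert
        return baseline(*args, **kwargs)
    return call

# Usage (hypothetical planners):
# plan_path = guarded(baseline_path_planner, experimental_path_planner)
```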

Increased Use of Distributed Computations. There appears to be a strong push for correlating software function with hardware modules, so that software separation is paralleled by hardware separation. This tendency seems to be predicated on the current inability to separate and protect software modules from one another except by placing them in separate hardware units.

The cost of this practice is to preallocate computing resources on a fixed basis rather than dynamically allocate them from a common pool. This results in underutilization of the computing resource, reduced capability, and/or decreased system flexibility. Current machine intelligence systems require large allocations of resources, but they utilize them only intermittently. Since such machine intelligence systems will initially be experimental, they are less likely to justify a fixed allocation of resources that are only occasionally required.

The benefits of increased utilization of dynamically allocated resources could be realized if there existed both a protection mechanism enforcing the separation of software modules (required above for "experimental" versions) and a resource allocator (a standard part of operating systems).

Use of Standardized Computer Hardware. As part of NASA's standardization program, standards are being set for onboard spacecraft computers. This is intended to reduce costs by avoiding new hardware development and space qualification efforts, to decrease development time by ensuring the early availability of the hardware, and to increase reutilization of existing software. However, since hardware technology is changing faster than the mission launch rate, standardization results in the use of obsolete hardware. This limits the resources available to any machine intelligence system. Development of software portability, or equivalently hardware compatibility within a family of machines, would mitigate all of these problems except for the time and cost of space qualification.

Long Lead Times Required by System Integration. Currently all software, like all hardware, must be created, debugged, and integrated many months before mission launch to ensure proper spacecraft functioning. But unlike hardware, software can be modified after launch via telemetry. This is especially important in long-life missions lasting many years. During that period, as the result of increased scientific knowledge, better software development techniques, and/or changed mission objectives, there may well be a need to modify and/or update the onboard software.

The benefits of post-launch software delivery would be reduced lead time for software and increased flexibility in mission objectives. This capability could be quite critical to early utilization of maturing machine intelligence technology. The notion is equally applicable to ground-based systems, which may be utilized long after the mission launch date. They too must be capable of being upgraded, modified, and/or supplanted by experimental capabilities during mission operations. The basis for such a capability is a centralized pool of computing resources with dynamic allocation and a protection mechanism for system integrity. It should be noted that this notion has already been incorporated into the Galileo mission plan (though for cost rather than flexibility reasons), in which the spacecraft software will be delivered after launch.

Desire to Minimize Onboard Software Complexity. This is part of NASA's larger effort to minimize spacecraft complexity in order to increase reliability. As above, special recognition of software's unique characteristics must be made; otherwise onboard capability will be unnecessarily restricted. The minimized-complexity criterion should be applied only to the baseline software and the protection mechanism, for this is the only portion of the spacecraft software relating to reliability, rather than to the entire package of "enhanced" modules. With such an approach, new capabilities, including experimental machine intelligence systems, could be incorporated in spacecraft without compromising reliability.

Central to each of these spacecraft trends is the notion that software is an ill-understood and difficult-to-control phenomenon which must therefore be managed, restricted, and carefully isolated into separate pieces. This notion has, in the past, been all too true, and its recognition is manifest in the trends described above. However, experience with current timesharing systems has produced techniques, combining hardware facilities and their software control, for providing separate virtual machines to several processes. Each virtual machine, while protected from the others, shares a dynamically allocated pool of resources (such as memory, time, and bandwidth) which may include guaranteed minimums. With such a capability, simulating separate machines via a hardware/software mechanism, all of the reliability advantages of separate machines are retained while the flexibility of dynamic resource allocation is also achieved. With proper software design these capabilities could be built into a general facility for incremental replacement of baseline modules by enhanced, and possibly experimental, versions, with automatic reversion to the baseline module upon failure of the enhanced module.
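The guaranteed-minimum idea can be sketched in a few lines; the class, the process names, and the resource figures below are purely illustrative and do not describe any existing NASA or timesharing implementation:

```python
# Illustrative sketch of dynamic allocation from a common pool with
# guaranteed minimums, in the spirit of the virtual-machine approach
# described above. All names and figures are hypothetical.

class ResourcePool:
    def __init__(self, total, minimums):
        self.total = total                       # e.g., kilowords of memory
        self.minimums = dict(minimums)           # guaranteed share per process
        self.granted = dict(minimums)            # start each process at its minimum

    def request(self, process, amount):
        """Grant extra resource beyond the minimum if the pool allows it."""
        free = self.total - sum(self.granted.values())
        self.granted[process] += min(amount, free)
        return self.granted[process]

    def release(self, process):
        """Return a process to its guaranteed minimum allocation."""
        self.granted[process] = self.minimums[process]

# Usage: pool = ResourcePool(total=64, minimums={"baseline": 16, "experiment": 8})
```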

5.3 Computer Systems Development Recommendations

We recommend a virtual machine and software-first approach to system development:

1. For both ground and spacecraft software, that NASA develop a virtual machine approach to software development in which protection between processes is maintained by the operating system, which also allocates resources as required with certain guaranteed minimums.

2. That within such a facility provisions be made for supplanting modules with upgraded or "experimental" versions. The operation of such modules will be monitored automatically and/or manually, and upon failure they will be automatically replaced by the reliable baseline module.

3. That NASA adopt a "software first" approach so that hardware can be supplied as late as possible (to take advantage of the latest capabilities). To support such an approach, either software portability (for newly developed code) or compatible machine families must be provided.

6. Software Technology

This section makes recommendations concerning the use of machine intelligence to further the production and maintenance of software throughout NASA. In addition, we will strongly recommend increased utilization of (non-machine intelligence) computer science to improve NASA's current capabilities in software.

6.1 Introduction

NASA is basically an information organization. Its mission is to collect, organize, and reduce data from near- and deep-space sensors into usable scientific information. Computers are obviously essential to this mission, as well as to the launch and control of the spacecraft involved. Annual computer expenses, for both hardware and software, represent about 25% (?) of NASA's total budget. Compared with other users of computer technology, such as military and commercial organizations, NASA appears to be merely a state-of-the-art user. But compared with the programming environments found in the universities and research institutes from which this Study Group's personnel were drawn, there is a world of difference. The technology lag represented by this gap is not NASA's responsibility alone, but is indicative that an effective technology transfer mechanism does not yet exist within the computer field. NASA would do well for itself, and would set a fine example, by remedying this.

There are two main issues we wish to cover in this section. The first concerns characterizing the state of software development within NASA, comparing it to the advanced software development facilities available in selected universities and research institutes, and outlining a short-term plan to transfer this technology effectively into the NASA environment. Second, there exists some preliminary, but far from practical, machine intelligence work on automating various parts of the software development process. We will briefly examine this work and its potential for NASA, then suggest an appropriate role for NASA in this field.

6.2 State of the Art

Software Development within NASA. With rare exception, NASA software is developed in a batch environment. Often the medium is punched cards. Programs are keypunched and submitted. Results are obtained from line-printer listings hours or even days later. The only debugging information provided is what the programmer explicitly created via extra statements within the program. Deducing the cause of a failure from the debug evidence produced is a purely manual operation. Changes are made by keypunching new cards and manually merging the corrections with the program. Then the cycle is repeated. In some NASA environments, cards have been replaced by card images stored on a file, and corrections are made with an online editor, but the process is essentially the same: only the keypunching and manual manipulation of cards have been supplanted. The programs are still developed and debugged in a batch mode.

Software Development in Machine Intelligence Laboratories. In striking contrast to the NASA program development environment is that existing at several laboratories (such as CMU, MIT, Stanford, BBN, ISI, SRI, and Xerox) working on machine intelligence. This environment is characterized by being totally online and interactive. Its heart is a fully compatible interpreter and compiler and an editor specifically designed for the language and for this interactive environment. The remarkable thing is that this environment is based not on machine intelligence mechanisms or concepts, but rather on a machine intelligence philosophical commitment to flexibility and a few key computer science ideas: that programs can be manipulated as normal data structures, and that all the mechanisms of the language and system must be accessible so that they too can be manipulated.

These key ideas, and a long development effort by many talented people, have created an unrivaled software development environment. In it, changes to programs are automatically marked on reformatted listings, the author and date of the changes are recorded, the correspondence between source and object modules is maintained, automatic instrumentation is available, non-existent code is easily simulated for system mock-ups, and extensive debugging and tracing facilities exist, including interactively changing the program's data and restarting it from any active module. In addition, arbitrary code can be added to the interface between any two modules to monitor their actions, check for exceptional conditions, or quickly alter system behavior. Also, an analysis capability exists to determine, via natural language, which modules call a given one, use a variable, or set its value.
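In the same spirit, the ability to interpose code at a module interface can be approximated in most modern languages by wrapping one function with another. The sketch below is a generic illustration with hypothetical names; it does not describe the laboratory systems cited above:

```python
# Generic illustration of interposing monitoring code at a module interface
# by wrapping one function with another; names are hypothetical.
import functools

def monitored(func, check=None):
    """Wrap func so every call is logged and the result is optionally
    checked for an exceptional condition before being returned."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__} with {args} {kwargs}")
        result = func(*args, **kwargs)
        if check is not None and not check(result):
            print(f"{func.__name__} returned an exceptional value: {result}")
        return result
    return wrapper

# Usage (hypothetical):
# estimate_attitude = monitored(estimate_attitude, check=lambda r: r is not None)
```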

There are many other capabilities, far too numerous to mention, but the key points are that they are fully integrated into an interactive environment and that a commitment has been made to substitute computer processing for many of the programmers' manual activities. As computer power becomes less and less expensive (by 1985, according to a CCIP-85 study, hardware will represent only 15% of the cost of a computer system; the rest will be the cost of the people producing the software) while people become more expensive, such a policy must clearly predominate. Furthermore, several studies have shown that software quality and cost improve as the number of people involved decreases. Thus, environments which improve programmer productivity by automating certain functions also improve quality while reducing costs.

6.3 Software Development Recommendations

For these reasons, we recommend:

1. That NASA immediately undertake a program to recreate within NASA the interactive programming environment found in various machine intelligence laboratories for some NASA language.

2. That NASA consider creating a modern data-encapsulation language (of the DOD-1 variety) as the basis for this interactive facility.

3. That NASA undertake this project only with the close cooperation of an advisory group drawn from these laboratories, and with NASA personnel familiarized with these interactive environments via extended onsite training visits (approximately 6 months in duration).

4. That NASA acquire the necessary computer hardware to support such an environment.

Automatic Programming. Having dealt with the current state of NASA's software production and its improvement through utilization of existing computer science technology, we can now address the central issue of utilizing machine intelligence for software production.

Software is essential to NASA's mission. It is used to launch and control spacecraft, to collect, reduce, analyze, and disseminate data, and to simulate, plan, and direct mission operations. The other sections of this report address extension of these capabilities through incorporation of machine intelligence in these software systems. Here, holding the functionality of the software constant, the use of machine intelligence to produce, or help produce, the software is considered.

Even with the capabilities suggested above for improving the production and utilization of software, the development of software is still largely a manual process. Various tools have been created to analyze, test, and debug existing programs, but almost no tools exist which aid the design and implementation processes. The only available capabilities are computer languages which attempt to simplify the statement of a finished design or implementation. The formulation of these finished products is addressed only by a set of management guidelines. As one can imagine, these manual processes, governed only by minimal and unevenly followed guidelines, are largely responsible for the variability currently found in the quality, efficiency, cost, and development time of software.

It is quite clear that significant improvements will not come from yet "better" design and implementation languages or "better" guidelines, but only from introducing computer tools which break these processes down into smaller steps, each of which is worked on separately and whose consistency with the others is ensured by the computer tool.
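One way to picture such a tool, purely as an illustration, is a facility that records each refinement step together with the examples it must satisfy and then checks every later implementation against that record. All names below are hypothetical, and no particular automatic-programming system is implied:

```python
# Purely illustrative sketch of recording refinement steps and checking
# consistency between steps; names are hypothetical.

steps = []

def refine(description, examples):
    """Record a refinement step: a description plus input/output examples
    that any later implementation must continue to satisfy."""
    steps.append({"description": description, "examples": examples})

def consistent(implementation):
    """Check a candidate implementation against every example recorded so far."""
    return all(implementation(inp) == out
               for step in steps
               for inp, out in step["examples"])

# Usage:
# refine("absolute value", examples=[(-3, 3), (0, 0), (5, 5)])
# consistent(abs)   # -> True
```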

This approach defines the field of automatic programming. It is based on machine intelligence technology and, like other machine intelligence systems, it is domain specific. Here the domain is the knowledge of programming: how programs fit together, what constraints they must satisfy, how they are optimized, how they are described, etc. Programming knowledge is embedded within a computer tool which utilizes the knowledge to automate some of the steps which would otherwise have to be manually performed. There is considerable diversity of opinion over the division between manual and automated tasks.

The critical issues, however, are that the unmanaged manual processes of design and implementation, which currently exist only in people's heads and hence are unavailable and unexaminable, have been replaced by a series of smaller explicit steps, each of which is recorded, and that some
