TABLE OF CONTENTS               


SECTION I

     1.  Architecture Evolution
     2.  Conventional Timesharing
     3.  Domain Processing
     4.  Governing Principles
     5.  High Level Design/Implementation
     6.  Advanced Concepts

SECTION II

     1.  System Environment Objectives
     2.  System Organization
     3.  Ring Network Protocol
     4.  32 Bit System Hierarchy
     5.  Node Organization
     6.  Bit Map Display

SECTION III

     1.  Processing Environment Objectives
     2.  System Name Spaces
     3.  System Relationships
     4.  Operating System Mapping
     5.  Memory Management Unit
     6.  Memory Management Unit - Protection/Statistics
     7.  Memory Management Unit - I/O Mapping
     8.  Paging System
     9.  Disk Structure
    10.  I/O Hierarchy
    11.  Stream I/O
    12.  Software Tools
    13.  Shell Programs
    14.  Compilation/Binding/Execution

SECTION IV

     1.  User Environment Objectives
     2.  User Name Space
     3.  Concurrent User Environment
     4.  Display Manager
     5.  User Environment

SECTION V

     1.  Summary of Key Points



I.1  ARCHITECTURE EVOLUTION:

This figure depicts the evolution of computer architecture
over the past 20 years.  The center diamond at the top shows
batch computing of the 1960's, which is characterized by very
little or no interactiveness and very little or no sharing of
peripherals and data files.  In the late 1960's computer
architecture evolved into two distinct forms.  On the one
hand there was timesharing, which was intended for people
who needed large machine architecture, but could sacrifice
certain degrees of performance and interactiveness.  Time-
sharing systems are characterized by poor interactiveness but
very good sharing characteristics and large machine archi-
tecture.  On the other hand, batch evolved into a form called
dedicated minicomputers.  Minicomputers are characterized by
good human interfaces and very good performance, but lack
the sharing of peripherals and data among a community of
users.

The APOLLO DOMAIN system has evolved as a direct result of
improvements in technology and is widely held to be the archi-
tecture of the 1980's.  It combines the good parts of both
timesharing and dedicated minicomputers, but eliminates the
disadvantages of both these earlier forms.  The APOLLO
DOMAIN system has good sharing capabilities provided by a
high speed network as well as interactiveness provided by a
dedicated computer available to each user.

I.2  CONVENTIONAL TIMESHARING:

In conventional timesharing, the access of common programs
and data files among a community of users is easily imple-
mented since all such files are centralized under the
control of a hierarchical file system.  Each branch in the
hierarchy consists of a directory of names corresponding
to files or other lower level directories.  A user communi-
cating via a conventional terminal expresses programs and
data files by typing the path name down the directory tree
(e.g., //PROGRAMS/SORT).  On conventional systems, this
convenient method of expressing file names only works for
files which are centralized--file transfer utilities are
frequently required to access remote files on a conventional
network.

I.3  DOMAIN PROCESSING - ITS ESSENCE:

The essence of the APOLLO DOMAIN system is a high level of
interactive, parallel performance for each user, and a local
area network with transparency, so that a community of users
can coordinate their computing in a comprehensive manner.

The user's viewpoint of the APOLLO DOMAIN system is twofold:
First, a global hierarchical naming tree (whose syntax is
identical to that of centralized timesharing) spans the
entire distribution of files across the network; and,
second, multiple independent programs can be concurrently
executing, each displayed on an independent window on the
display terminal. 

Much of this document describes the internal architecture
of the APOLLO system that allows these two features to
operate efficiently.  The advantages to applications are
manifold and include the ability for many user nodes to
share single copies of programs and data files, eliminating
the overhead of file copying along with the administrative
burden of maintaining separate revision levels.  The system
provides the user with a robust set of naming mechanisms
that are independent of whatever administrative policies
the user wishes to superimpose.


             GOVERNING PRINCIPLES

         o  DEDICATED CPU PER USER

         o  INTEGRAL WIDE BAND LOCAL NETWORK

         o  HIGH LEVEL DESIGN (ISP, VAS, PMS, INDEPENDENCE)

         o  USE OF ADVANCED TECHNOLOGIES
            (VLSI, CPU, WINCHESTER DISKS, etc.)


I.4  GOVERNING PRINCIPLES:

There are several principles that govern the design of the
APOLLO computer system.  First and foremost is the notion
of a dedicated CPU for each user.  Second, each user is
interconnected with a high performance local area network.
Third, the design of the architecture is based on high level
abstractions so that we may independently evolve lower
level components (such as the instruction set, or internal
buses) with minimum impact.  Fourth is the use of advanced
technologies, such as VLSI, Winchester disks, and high
density dynamic RAMs.

I.5  HIGH LEVEL DESIGN/IMPLEMENTATION:

The APOLLO system incorporates designs which are uniformly
advanced beyond those of conventional computers.  A
conventional computer is characterized by: (1) a machine-
level instruction set, or ISP; (2) a machine level address
space, or virtual address space, which is a measure of the
range of addressing that the computer can span; (3) the
processor memory bus organization, or PMS, including the
memory buses, the attachment of processors, the attachment
of multiple memory units and so on; and (4) the Input/
Output system of the computer, or I/O bus.

The APOLLO system is designed around higher level abstrac-
tions in each of these particular areas.  For example,
rather than an instruction set, we offer a high-level
language implementation in PASCAL.  Similarly, instead of
a machine address space, we support a 96 bit network-wide
global object address space.  Our thinking here is that
objects are very large entities, each spanning a 32 bit
address space, whose location should be anywhere on the
network.  This 96 bit network-wide object address space is
the fundamental system address in the APOLLO DOMAIN system
and is designed to accommodate various machine-level
address spaces.
Similarly, rather than designing the system around a 
processor memory bus organization, the APOLLO system is 
designed around a two address packet network.  This network
is used to attach computation units, peripheral units and
gateways to other systems.  It is the backbone of the 
system, allowing users to intercommunicate, to access shared
programs and data files, and to access shared
peripherals.  Finally, our I/O bus is not an integral part
of our internal system, but rather an IEEE proposed 
standard MULTIBUS which is externally available to users and
is widely acknowledged as a standard for small computers in
the computer industry.

MULTIBUS is a trademark of Intel Corporation, Santa Clara,
CA.



                   ADVANCED CONCEPTS     

           SYSTEM ENVIRONMENT
                 o NETWORK ORGANIZATION
                 o RING NETWORK PROTOCOL
                 o NODE ARCHITECTURE

           PROCESSING ENVIRONMENT
                 o NETWORK WIDE VIRTUAL MEMORY

                 o SHELL PROGRAMMING
                 o COMPILATION/BINDING/EXECUTION

           USER ENVIRONMENT
                 o USER NAME SPACE
                 o CONCURRENT PROCESSING
                 o BIT MAP DISPLAY MANAGEMENT


I.6  ADVANCED CONCEPTS:

Many advanced concepts have been applied in the DOMAIN
architecture.  They can be roughly broken down into three
general categories: (1) those pertaining to the overall
system environment; (2) those pertaining to the program
environment; and (3) those pertaining to the user environ-
ment.  It is useful to point out certain features of the
DOMAIN system in each of these environments.

The APOLLO system environment is unique in the sense that
the architecture is based on a network, not a central
systems architecture.  This network supports shared data and
peripherals and is controlled by an object-oriented
operating system that will be described in more detail
later.

The processing environment for the APOLLO system includes:
(1) a very large linear address space for virtual memory
management; (2) advanced concepts, such as stream I/O,
which will be described later; and (3) new ideas, such as
shell programming, which allows people to build procedures
at the command level.

The user environment of the APOLLO DOMAIN system is radi-
cally different from conventional systems.  Rather than a
character-oriented dumb terminal, the APOLLO system offers
each user an integral bit map display.  This parallel
device allows many concurrent programs to be executing on
behalf of each individual user.  By dividing the display
into multiple independent window areas, the outputs of all
user programs can be displayed as they execute.


              SYSTEM ENVIRONMENT OBJECTIVES


           NETWORK MODULARITY
                 o WIDE PERFORMANCE RANGE
                 o HIGH AVAILABILITY

           RING NETWORK
                 o HIGH SPEED/LONG DISTANCE
                 o MULTIPLE TECHNOLOGIES

           MAXIMIZE NETWORK INTERACTIVENESS
                 o NO SUPERFLUOUS MESSAGE BUFFERING
                 o MAXIMUM DMA DATA RATES
    
II.1  SYSTEM ENVIRONMENT OBJECTIVES:

Network modularity was a principal design objective of the
DOMAIN computer system, providing a wide range in perform-
ance, a wide range in growth capability, and a wide range
in system level availability.  Modularity at the network
level allows users to incrementally expand their system by
themselves on their site, and without substantial program-
ming.  It means that they can replicate nodes to obtain very
high availability.  It further means that the user can
configure his specific application in the most cost
effective way he chooses.  From a manufacturer's point of
view, network modularity significantly eases system
maintenance, allowing the replacement
of entire nodes as well as the ability for one node to
diagnose another.

A second design objective for the DOMAIN system environment
was to incorporate a high performance coaxial local area
network.  Although our system is designed to accommodate
any two address packet transport mechanisms, the specific
implementation that APOLLO has chosen involves a ring
topology.  Rings have numerous advantages over alternative
approaches:  they generally allow higher data bandwidths and
longer distances, they allow migration to new technologies
such as fiber optics, they are very interactive allowing
very fast network arbitration, and finally they incorporate
a free acknowledgment function with the circulation of each
packet.

A third system environment objective was to minimize network
delays.  In this regard, our design eliminates all super-
fluous message buffering between nodes, allowing a message
generated from one process to be transmitted directly to
another process on a separate machine.  Secondly, our net-
work controller transmits data through the block multiplexor
channel, which allows all high performance DMA devices to
have access to the total memory bandwidth of both machines.
Consequently, when a message is transmitted from one
machine to another, the data rate is at the maximum
possible permitted by the two memory systems.


II.2  SYSTEM ORGANIZATION:

The system level organization of the APOLLO system is based
on the APOLLO DOMAIN network.  This network allows an
extremely wide range in performance, growth and system
availability.  Moreover, users attached to the system can
intercommunicate, can access shared programs and data
files across the network, can access common pools of periph-
erals, and can finally access remote facilities, including
large foreign machines or other DOMAIN systems.  Conse-
quently, the APOLLO DOMAIN network together with the per
user computing node is intended to provide an entire
computing facility to each user.


II.3  RING NETWORK PROTOCOL:

The APOLLO DOMAIN system is designed around a two address
packet transport network.  The specific implementation of
this network can take various forms, and the system is
specifically designed to be able to migrate from one form to
another as the technology requires.

The topology of the DOMAIN network is in the form of a
circular ring.  Access to this ring is arbitrated through
the passing of a TOKEN which is a specific encoding of bits
passed from one node on the network to another.  The system
allows one and only one TOKEN to be on the ring at any given
instant, and the possession of this single TOKEN gives a
particular node exclusive use of the network for the dura-
tion of a message transmission.  The format of the message
on the ring includes the destination node address, the
source node address, header information, data, a CRC check,
and finally an acknowledgment field, which is modified by
the destination node, thereby acknowledging the correct
receipt of the packet to the source node.

The encoding on the ring uses a conventional bit stuffing
technique whereby the occurrence of five consecutive 1's
causes the insertion of a 0 on transmission and a corre-
sponding removal of the 0 upon reception.  Several special
flag characters are used to establish packet synchronization
and are encoded as a string of six consecutive 1's followed
by two identifier bits.  One of these is the TOKEN, which
deviates from other flag characters by only the last bit, 
thereby allowing a node to exclusively acquire a TOKEN by
simply altering that bit.  This allows minimal buffering
in each node and therefore maximizes network responsiveness.
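
To make the stuffing rule concrete, the sketch below (written
in Python, purely for illustration) inserts and removes the
stuffed zero as described above.  The exact identifier bits of
the flag and TOKEN encodings are assumptions; only their
general form is given in the text.

    # Data bits are represented as lists of 0/1 integers (not real hardware).

    def stuff(bits):
        """Transmit side: insert a 0 after every five consecutive 1's."""
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == 1 else 0
            if run == 5:
                out.append(0)      # stuffed zero keeps data from imitating a flag
                run = 0
        return out

    def unstuff(bits):
        """Receive side: remove the 0 that follows five consecutive 1's."""
        out, run, skip = [], 0, False
        for b in bits:
            if skip:               # this is the stuffed zero; drop it
                skip, run = False, 0
                continue
            out.append(b)
            run = run + 1 if b == 1 else 0
            if run == 5:
                skip = True
        return out

    # Flags are six consecutive 1's plus two identifier bits; the TOKEN
    # differs from the other flags only in its final bit.  The identifier
    # bit values shown here are assumptions.
    FLAG  = [1, 1, 1, 1, 1, 1, 0, 0]
    TOKEN = [1, 1, 1, 1, 1, 1, 0, 1]

    data = [1, 1, 1, 1, 1, 1, 0, 1]    # would look like a flag if sent raw
    assert unstuff(stuff(data)) == data
    print(stuff(data))                 # [1, 1, 1, 1, 1, 0, 1, 0, 1]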


II.4  32 BIT SYSTEM HIERARCHY:

The APOLLO central processing unit is built around a VLSI
microprocessor with 32 bit architecture.  The instruction
set of the processor includes both 32 bit data types as
well as a 24 bit linear virtual address space.  The physical
parameters of the system, most notably the width of the data
path, can be viewed in a hierarchical arrangement.  At the
system level, computer nodes are interconnected with a 1 bit
serial packet network.  Certain peripherals attached to an
individual computer node are interconnected with 8 bit (1
BYTE) data paths, whereas, the memory system and high
performance peripherals operate on a 16 bit data path.
Internal CPU registers and an arithmetic logic unit are
all implemented with full 32 bit data paths.  Consequently,
the CPU is generally 32 bits wide, the memory system is
generally 16 bits wide, while the network system is only a
single bit wide.  The width of the data path varies
inversely with the physical distance from the internal
processing registers.



II.5  NODE ORGANIZATION:

The internal APOLLO node organization is comprised of
several key parts.  First, there is the central processing
unit comprised of multiple VLSI packages.  This central
processing unit is connected to a memory management unit
which translates the 24 bit virtual address out of the CPU
into a 22 bit physical address on the physical memory bus.
The memory management unit is actually comprised of two
parts: one for the CPU and another part for the I/O system,
which will be described later.  The memory system is
comprised of multiple units - each unit containing either
1/2 or 1 megabyte.  Each unit is fully protected with error
correction codes.  The memory system is expandable to 3.5
megabytes.  The I/O system of
the APOLLO node is broken down into two parts.  The first 
part is for those peripherals that are integral to the 
APOLLO system, such as the Winchester disk and the network
controller.  These devices are connected to a block
multiplexor channel.  Other peripherals, such as user
supplied peripherals, line printers, magtapes and so on,
are connected to the optional MULTIBUS controller.

The use of a block multiplexor channel through which all
disk and network traffic goes represents an essential part
of the APOLLO system.  The system was designed to specifi-
cally maximize the node-to-node responsiveness across the
network.  We wanted to guarantee that there would be no
superfluous buffering of packet messages as they left a
transmitting process and entered a receiving process on
another machine; and, secondly, we wanted the transfer of
this packet to operate at near memory speeds.  To accomplish
this responsiveness we allow the network full (100%)
bandwidth access to primary memory, temporarily disallowing
all other block transfers, such as those of the Winchester
disk.  Consequently, the network and the disk actually share
a common DMA channel into primary memory so that both of
these devices can transfer at data rates of nearly 100% of
memory bandwidth.  Occasionally, a disk transfer will
overlap a network transfer, requiring that either device
make one additional revolution.  But the system level
performance consequences of this interference are
negligible.

Finally, the display system is comprised of a separate
autonomous 1/8 megabyte bit map memory which is organized
into a square array of 1024 bits on each side.
The display memory is constantly refreshed onto an 800 x
1024 bit map CRT.  There is a separate bit mover which is
capable of moving rectangles from one part of the display
onto another part of the display at a data rate of 32
megabits per second.

Although the display memory and the program memory are in
separate physical bus organizations, they actually share
the same address space so that the CPU can instantaneously
access display memory and alter its contents.  Furthermore,
the bit mover can move display areas (rectangles) into and
out of program memory.  The system is designed so the CPU
can access program memory, the display memory can refresh
the CRT display, and the bit mover can be moving rectangles
all in parallel and without interference.



II.6  BIT MAP DISPLAY:

The bit map display system is comprised of a 1024 bit by
1024 bit array.  A rectangular region of 800 by 1024 bits
refreshes the CRT display.  The remaining area is used as
temporary storage for character font tables.  The bit mover is a hard-
ware primitive which is capable of moving a rectangular area
from any one place on the display to any other place on the
display.  This primitive is used to move windows into and
out of main memory, to move them relative to the display
itself, to implement scrolling, and to create character
strings from character fonts.  The bit mover operates at a
32 megabit per second data rate when moving entirely within
the display memory.
The bit mover can move bit-aligned rectangles from display
memory to/from word-aligned buffers in program memory where
the CPU can efficiently perform raster operations, such as
exclusive ORing two or more graphic representations.              
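
As a rough illustration of such a raster operation, the
following Python sketch moves a small rectangle and combines
it with the destination by exclusive OR.  The nested lists
stand in for bit map memory; all names and sizes are invented
for the example and do not describe the actual bit mover
hardware.

    # Bitmaps are lists of rows of 0/1 pixels; sizes here are tiny on purpose.

    def make_bitmap(width, height):
        return [[0] * width for _ in range(height)]

    def blt_xor(src, sx, sy, dst, dx, dy, w, h):
        """Move a w x h rectangle from src to dst, combining with XOR."""
        for row in range(h):
            for col in range(w):
                dst[dy + row][dx + col] ^= src[sy + row][sx + col]

    display = make_bitmap(16, 8)          # stands in for display memory
    buffer  = make_bitmap(16, 8)          # stands in for program memory
    for y in range(4):                    # draw a 4 x 4 square in the buffer
        for x in range(4):
            buffer[y][x] = 1
    blt_xor(buffer, 0, 0, display, 6, 2, 4, 4)   # move it onto the display
    blt_xor(buffer, 0, 0, display, 6, 2, 4, 4)   # XORing again erases it
    assert all(p == 0 for row in display for p in row)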
       
       
                PROCESSING ENVIRONMENT OBJECTIVES

           o 32 BIT OBJECT ADDRESS SPACE (NETWORK GLOBAL)

           o DEMAND PAGED I/O (NETWORK & DISK)

           o UNIQUE OBJECT NAMES (64 BIT UIDs)

           o PROCESS - PROCESS STREAMING

           o SHELL PROGRAMMING

           o EFFICIENT COMPILING/BINDING/EXECUTION


III.1  PROCESSING ENVIRONMENT OBJECTIVES:

A principal objective in designing the DOMAIN system pro-
cessing environment was the generalization of common enti-
ties, like programs and data files, into a uniform abstrac-
tion which we call an object.  The totality of objects
across a network forms a 96 bit address space; a second
objective was to use this address space to implement a
demand paged, network wide virtual memory.  A third objec-
tive was to provide an environment for efficient process-to
-process streaming and the control of this streaming through
shell programs.  Finally, an efficient compilation, binding
and execution procedure runs interactive programs which are
available network-wide.


III.2  SYSTEM NAME SPACES:

We now turn to the operating system design in the APOLLO
DOMAIN system.  One way of viewing a complex system is to
describe the various name spaces that occur in the system.
First, there is the user global namespace, or what the user
would normally type at a terminal to execute a program or
access a data file.  Second, there is the system global
namespace, or the namespace that the operating system uses
at a network level.  Third, there is an object address
space, which is 32 bits long and contains programs and files
as well as other entities in the operating system which will
be described later.  Fourth, there is a process virtual
address space that represents an address space in which a
process executes.  Fifth, there is the physical address
space, which represents the amount of physical memory that
can be placed on the system.  Sixth, there is the network
address space, or the maximum number of nodes that can be
placed on the network.  And, finally, there is the disk
address space, or the maximum number of bytes or pages that
the disk can hold.

In the APOLLO system the user global namespace is syntac-
tically represented as a stream of characters separated by
slashes.  This actually represents a hierarchical tree
space which will be described later.  The system global
namespace is a 96 bit address space comprised of a unique
ID (UID) of 64 bits and an offset which is 32 bits wide.
The 64 bit UID is unique in space and time.  It is unique
in space in that it includes an encoding of the machine's
serial number, and it is unique in time in that it includes
the time at which the name was created.  This guarantees
that for all time in the future, and for all machines that
APOLLO builds, no two machines will ever create the same
UID, hence the term unique ID.
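
One illustrative reading of this scheme is sketched below in
Python: a machine serial number and a creation time are
packed into a 64 bit UID, and a 32 bit offset is appended to
form the 96 bit system global address.  The individual field
widths chosen here are assumptions; the text specifies only
the total sizes.

    # Hypothetical packing: high 32 bits = creation time, low 32 bits =
    # machine serial number.  The real field widths are not given here.

    import time

    def make_uid(serial_number, created=None):
        created = int(created if created is not None else time.time())
        return ((created & 0xFFFFFFFF) << 32) | (serial_number & 0xFFFFFFFF)

    def object_address(uid, offset):
        """96 bit system global address = 64 bit UID + 32 bit byte offset."""
        return (uid << 32) | (offset & 0xFFFFFFFF)

    uid = make_uid(serial_number=12345)
    address = object_address(uid, offset=1024)       # byte 1024 of the object
    print("UID            = %016x" % uid)
    print("96 bit address = %024x" % address)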

UID's are names of objects.  Objects are used to hold
programs, files and various other entities in the APOLLO
system.  An object is a linear 32 bit address space, byte
addressable, and can be located generally any place on the
network.  Objects are the primary focus for the APOLLO
DOMAIN system and are cached into the process address
space provided by the processor.  This process address space,
while very large, is still considerably smaller than the
32 bit object address space.  Consequently, address regions
of an object address space are mapped into regions of a
process in much the same way that regions of physical memory
are frequently mapped into regions of a cache memory.  The
process address space is a 24 bit virtual address space,
which is converted to a 22 bit physical address by memory
management hardware.  The unit of allocation in the physical
address space is 1024 byte pages.


III.3  SYSTEM RELATIONSHIPS:

The execution of a user command on the APOLLO DOMAIN system
involves many steps.  First of all the user types a command
which is translated by the naming server into a UID.  The UID
is a 64 bit address which identifies one particular object 
on the network.  These objects then are dynamically mapped
by the operating system into a process virtual memory.  Once
mapped, no data is transferred until the CPU actually
requests it.  When a page fault occurs, the operating
system retrieves the requested page from some disk struc-
ture across the network and transfers it into the physical
memory of the local processor.  It then sets up the memory
management unit to translate the virtual address into the
physical address of the requested page and then allows pro-
cessing to continue.  In this scenario there are four areas of
interest.  First is the operating system mapping structure,
which maps object address spaces into process address
spaces. Second is the memory management hardware which
translates process virtual address spaces into physical
memory address spaces.  Third is the paging system which
transfers pages of physical memory into and out of the
memory system onto either local disk or across the network
to some remote disk.  And, fourth, is the disk structure
that physically relates objects onto disk data blocks.  
These circular relationships are dynamically managed by the
DOMAIN operating system.



III.4  OPERATING SYSTEM MAPPING:

The network global object spaces are mapped selectively
into a process virtual address space of a particular node. 
Once the mapping occurs no data is transferred until the
processor actually requests it.  Consequently, the mapping 
of a large address space from an object into a large region
of a process is a relatively inexpensive procedure.  The
objects, of course, are network wide; whereas, the pro-
cesses are all in a particular node running on behalf of a 
particular user.  The process address space is subdivided
into an area which is global to all processes and then
further divided into an area which is per process super-
visor and per process user.  This address space mapping
represents the only primitive in which processes can
relate to objects.  For the most part the operating system
and all higher level views of the system relate to objects
rather than processes, and consequently a great deal of
network transparency is attained.



III.5  MEMORY MANAGEMENT UNIT:

The memory management unit (MMU) is a piece of hardware
which translates the 24 bit virtual address space of the
CPU into the 22 bit physical address space of the DOMAIN node.
The MMU works on 1024 byte physical page sizes and has
separate protection and statistics information for each
page.  There exists a separate entry in a page frame table
for each individual page so that when the hardware faults
out of the page frame table (i.e., cannot find the appropriate
requested page), an interrupt is taken to move the
requested page in from secondary storage.  The MMU is
actually a two level hierarchy, the page frame table being
at the highest level.  A lower level cache, called the page
translation table, contains the most recently used pages and
acts as a speed-up mechanism to search the page frame table.

The translation of a virtual address into a physical
address proceeds roughly as follows.  The 24 bit virtual
address is broken down into three fields: (1) a high order
virtual page number, (2) a page number, and (3) a byte
offset within the page.  The 10 bit page number is used as
an index into the page translation table.  The page
translation table contains a 12 bit pointer which points
directly to the requested physical page.  Concurrent with
the memory system beginning a memory request, this 12 bit
pointer is also used to index into the page frame table,
from which the high order virtual page number is checked.
If the check is okay, the protection is allowed and the
process ID agrees, then the memory reference proceeds
uninterrupted.  If, however, there is no agreement on any
of these accounts, the memory request is aborted and a
search is made in the page frame table for all entries
corresponding to this particular value of page number.  All
possible entries for this page number are linked together in
a circular list, and the hardware automatically searches for
the requested page until: (1) it finds it and continues, or
(2) it does not find it and causes a CPU interrupt.  If the
requested page is found in the page frame table, its location
within the page frame table is recorded in the page
translation table so that subsequent references can proceed
without searching the page frame table.
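
The sketch below models this two level lookup in Python.  The
table contents, class names and the handling of protection are
invented or simplified; only the field widths (10 bit page
number, 12 bit frame pointer, 1024 byte pages) and the
circular-list search follow the description above.

    PAGE_BITS = 10          # 1024 byte pages
    PTT_BITS  = 10          # low order virtual page number bits

    class PageFault(Exception):
        pass

    class PageFrameEntry:
        def __init__(self, high_vpn, process_id, frame, next_entry=None):
            self.high_vpn = high_vpn      # high order virtual page number
            self.process_id = process_id
            self.frame = frame            # 12 bit physical page frame number
            self.next_entry = next_entry  # circular list, same low page number

    class MMU:
        def __init__(self):
            self.ptt = {}                 # page translation table (the cache)

        def translate(self, vaddr, process_id):
            offset   = vaddr & ((1 << PAGE_BITS) - 1)
            low_vpn  = (vaddr >> PAGE_BITS) & ((1 << PTT_BITS) - 1)
            high_vpn = vaddr >> (PAGE_BITS + PTT_BITS)
            entry = self.ptt.get(low_vpn)
            if entry is None:
                raise PageFault(hex(vaddr))       # page not resident
            start = entry
            # Fast path: cached entry matches; otherwise walk the circular
            # list of page frame table entries with this page number.
            # (Protection and statistics checks are omitted here.)
            while entry.high_vpn != high_vpn or entry.process_id != process_id:
                entry = entry.next_entry
                if entry is None or entry is start:
                    raise PageFault(hex(vaddr))   # CPU interrupt: fetch the page
            self.ptt[low_vpn] = entry             # later references hit directly
            return (entry.frame << PAGE_BITS) | offset

    mmu = MMU()
    e = PageFrameEntry(high_vpn=0x3, process_id=7, frame=0x0A5)
    e.next_entry = e                              # one-element circular list
    mmu.ptt[0x12A] = e
    paddr = mmu.translate((0x3 << 20) | (0x12A << 10) | 0x080, process_id=7)
    assert paddr == (0x0A5 << 10) | 0x080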



III.6  MEMORY MANAGEMENT UNIT-PROTECTION/STATISTICS:

At each access to a page, a set of rights (execute, read,
write) is checked as a function of the particular level at
which the process is running.  The protection hardware speci-
fies the particular rights at this level and all higher
levels.  There are two supervisor levels and two user
levels.

The memory management hardware automatically records and
maintains certain statistics about the page access.  In
particular a bit is set every time a page is modified. 
The operating system nucleus scans these bits periodically
to maintain knowledge of the statistical usage of the pages
for the purpose of page replacement.



III.7  MEMORY MANAGEMENT UNIT - I/O MAPPING:

Peripherals on the MULTIBUS are mapped into the 22 bit
APOLLO physical address bus by means of an I/O map.  The I/O
map consists of 256 page entries, each entry pointing to a
particular APOLLO page.  A peripheral on the MULTIBUS can 
generate a 16 bit word or byte address and have the high
order bits indexed into the page map and low order bits
indexed relative to the page.  In this way MULTIBUS
peripherals can directly address themselves into the
virtual memory of a process.
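
A minimal sketch of this mapping, assuming the 1024 byte page
size used elsewhere in the system and an arbitrary split of
the MULTIBUS address, might look as follows (Python, for
illustration only).

    PAGE_BITS = 10
    io_map = [None] * 256              # each entry names a physical page

    def multibus_to_physical(bus_addr):
        page_index = bus_addr >> PAGE_BITS          # high order bits
        offset = bus_addr & ((1 << PAGE_BITS) - 1)  # low order bits
        page = io_map[page_index]
        if page is None:
            raise ValueError("I/O map entry not set up by the system")
        return (page << PAGE_BITS) | offset

    io_map[3] = 0x7F2                  # operating system points entry 3
    print(hex(multibus_to_physical((3 << PAGE_BITS) | 0x01C)))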



III.8  PAGING SYSTEM:

To implement the network wide virtual memory system, several
tables are maintained within the operating system nucleus.
As objects are mapped into process address spaces, entries
are made into the mapped segment table (MST).  When a CPU
fault occurs for that virtual address, the operating system
scans the active segment table (AST).  This table contains
a cache of pointers to the actual location of the pages, be
they in physical memory, on local disk, or on a remote net-
work node.  In this way, objects that are logically mapped
into a process are being constantly swapped in and out of
memory across the network solely on a demand basis.
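
The following fragment is a much simplified model of that
fault path; the table layouts and location encodings are
invented, and only the roles of the MST and AST follow the
text.

    # Simplified model of the fault path through the mapped segment table
    # (MST) and the active segment table (AST).

    mst = {}   # (process id, segment) -> (object UID, first object page)
    ast = {}   # (object UID, object page) -> where that page currently is

    def resolve_fault(process_id, segment, page_in_segment):
        uid, base = mst[(process_id, segment)]     # which object is mapped here
        object_page = base + page_in_segment
        where = ast.get((uid, object_page))
        if where is None:                          # not cached locally
            where = ("network", "demand page from the remote node")
        return uid, object_page, where

    mst[(7, 4)] = (0x1234ABCD, 0)
    ast[(0x1234ABCD, 2)] = ("local disk", 9150)
    print(resolve_fault(7, 4, 2))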



III.9  DISK STRUCTURE:

Objects are mapped onto physical disks via dynamic storage
allocation.  First of all, the disk structure contains a physical
volume label which is a list of pointers which point to
multiple logical volume labels.  The division of a physical
volume into multiple logical volumes means fixed partitions
can be created which do not compete for common storage. 
One can create a logical volume and guarantee it has a
certain minimum amount of allocation.

Each logical volume label contains a volume table of con-
tents map.  The volume table of contents is a list of all
of the object UID's in the volume, and for each object a
set of object attributes.  The object attributes consist of
the object type, access control information, accounting
information (last date accessed, last date modified), and a
map to all of the various data blocks which comprise the
object.  The map is comprised of 35 pointers.  The first 32
pointers point directly to data blocks, each of which
consists of a single page.  The 33rd pointer points to a
block of second level pointers which in turn point to actual
data blocks.  The 34th pointer expands into three levels of
storage and the 35th pointer expands into four levels of
storage.  Consequently, storage allocation is very
efficient for both large and small objects.

Each block contains not only 1024 bytes of data, but also
the UID and object page number that this page represents.
Consequently if a failure should occur, the entire mapping
structure can be reached by a single pass over all of the
data pages.
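
The sketch below follows this map for a given object page
number.  The count of pointers per indirect block (256,
assuming 4 byte pointers in a 1024 byte block) is an
assumption made for the example.

    DIRECT = 32          # direct pointers in the map
    PER_BLOCK = 256      # assumed: 4 byte pointers in a 1024 byte block

    def pointer_chain(page_number):
        """Return the map pointer used and the index followed at each level."""
        if page_number < DIRECT:
            return ("direct", page_number)
        page_number -= DIRECT
        if page_number < PER_BLOCK:
            return ("single indirect", page_number)
        page_number -= PER_BLOCK
        if page_number < PER_BLOCK ** 2:
            return ("double indirect",
                    page_number // PER_BLOCK, page_number % PER_BLOCK)
        page_number -= PER_BLOCK ** 2
        return ("triple indirect",
                page_number // PER_BLOCK ** 2,
                (page_number // PER_BLOCK) % PER_BLOCK,
                page_number % PER_BLOCK)

    for p in (5, 40, 50000):
        print(p, pointer_chain(p))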


                       I/O HIERARCHY

           Language I/O     Industry Compatible,
                            System Independent

           Stream I/O       Object Type Independent,
                            Process-Process, File, Device,
                            etc.

           Mapped I/O       Object Location (Network Wide)
                            Independent.  Associates object -
                            process addressing only, No data
                            transferred until reference is
                            made.

           Page I/O         Physical I/O to Local and Remote
                            Disks across Network.  Data
                            transferred "On Demand," resulting
                            from CPU Page Fault.



III.10  I/O HIERARCHY:

There are four levels in the I/O system of the APOLLO DOMAIN.
The highest is the language level which is supported by the 
standard language constructs such as FORTRAN's read and write.
The implementation of this language level is done by the stream
level.  The stream level has the characteristic of being object
type independent and can accordingly talk to files, peripheral
devices, or to other processes.  The implementation of the
stream level is accomplished through the map primitives which
were described earlier.  The map primitives have the charac-
teristics of being object location independent thereby allow-
ing streams to operate across the network.  The mapped primi-
tive associates object to process addressing only.  No data is
transferred until the reference is made.  All data transfer
in the entire system occurs at the page level.  The page level
is the physical I/O to local and remote disks across the net-
work.  This data is transferred on demand, resulting exclusively
from a CPU page fault.


III.11  STREAM I/O:

The stream I/O level deals with the interconnection of objects,
including process to file operations, and process to process
operations.  It is object type independent.  Since streams are
implemented through the mapped I/O level, objects can be con-
ceptually interconnected by streams both within the same node
and across the network.

When streams are used to interconnect processes, the output of
one process is connected to the input of another process.  
This multiple process application can take the form of a
stream filter whereby every process performs some transfor-
mation on its input and then passes the output to another
process.
When applications are encoded in this manner, programmers are
encouraged to write processes as simple, modular programs that
perform some primitive function.
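
The fragment below illustrates the filter idea only; Python
generators stand in for the separate processes that the real
system connects with streams.

    # Each "filter" transforms its input stream and passes the result on.

    def upcase(lines):
        for line in lines:
            yield line.upper()

    def number(lines):
        for i, line in enumerate(lines, start=1):
            yield "%4d  %s" % (i, line)

    source = iter(["first record", "second record"])
    for out in number(upcase(source)):     # source -> upcase -> number -> output
        print(out)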


III.12  SOFTWARE TOOLS:

A large collection of program modules designed to perform
primitive functions has evolved over years of use by a large
collection of users.  These modules are referred to as Soft-
ware Tools and are widely distributed throughout the user
community.  Software Tools follows the methodology laid out
in the book entitled Software Tools by Kernighan and Plauger,
published by Addison-Wesley.

Applications can be easily formed by interconnecting streams of
data through a collection of Software Tools.  In this way
complex applications can frequently be formed with little or no
programming.  The time required to develop a new application
is significantly reduced.  Furthermore, users are encouraged
to write programs that are small, conceptually simple, and
usable for many applications and by many users.


III.13  SHELL PROGRAMS:

A shell program is a higher level flow of control above the
conventional program level (e.g. Fortran or Pascal).  Shell
programs are written in a shell programming language that has
a rich set of constructs similar to those of a conventional
programming language.  The execution of a shell program
frequently involves the complete execution of one or more
conventional programs.  In this regard, a shell program
can be thought of as a sophisticated command processor which
coordinates the execution of multiple program steps.

The ability of users to program applications in a shell pro-
gramming language relieves a great deal of complexity that
would otherwise be required within a Fortran or Pascal pro-
gram.  Consequently, programs written in these languages tend
to be simpler and have fewer input options.

The concept of shell programming goes hand-in-hand with the
concept of Software Tools.  Here, the shell programs represent
the interconnect of streams between various programs, and can
be extended to richly interconnect small programs in order to
form complex applications.
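
As a rough analogue, the Python sketch below coordinates the
complete execution of several small program steps and passes
the output of one step to the next.  It is not the DOMAIN
shell language, merely an illustration of the flow of control
a shell program expresses; the steps themselves are trivial
one-liners run as separate processes so that the example is
self-contained.

    import subprocess, sys

    def run_step(code, input_text=""):
        """Run one program step to completion and capture its output."""
        done = subprocess.run([sys.executable, "-c", code],
                              input=input_text, capture_output=True, text=True)
        return done.stdout

    produce = run_step("print('alpha'); print('beta')")
    upper   = run_step("import sys\nfor l in sys.stdin: print(l.strip().upper())",
                       produce)
    report  = run_step("import sys; print(len(sys.stdin.readlines()), 'lines')",
                       upper)
    print(upper, end="")
    print(report, end="")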


III.14  COMPILATION/BINDING/EXECUTION:

We now shift to the higher level organization of objects
related to user programs, compilers, binders and loaders.

The compiler translates a source program object into a compiled
object.  The compiled object has a format which is suitable for
direct execution if there are no unresolved references (i.e.,
no other subroutines which need to be bound together).  If the
application contains several source program objects, these com-
piled objects must be bound together prior to execution, a 
process accomplished by the BINDER.  The process of loading
and executing a compiled object consists of: (1) mapping the
pure position independent code into a region of process address
space, (2) creating an impure section of the process address
space, and (3) dynamically linking references to the
operating system during execution.

There are three important points in this procedure: (1) The
output of a compiler can be directly executed if there are no
external references to be resolved; (2) a runnable object, once
formed, is paged into memory at run time, on demand; and (3) 
source program objects, compiled objects, and bound objects
can reside anywhere on the network.

The compiled object format is comprised of two parts:  The 
first major part is position independent code and pure data,
which is mapped directly into a process address space and
executed.  The second part is a database used by the loader to
create an impure temporary data object which is subsequently
mapped into the impure part of a process address space.  Thus,
all DOMAIN code is inherently re-entrant.  This feature is
essential to efficient memory page management within a node
and contributes to efficient use of the DOMAIN network for
remote demand paging as well.
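
A conceptual sketch of that load step appears below (Python,
for illustration only): the pure section is shared among
processes while each process receives its own copy of the
impure section.  The record layout and names are invented.

    from dataclasses import dataclass, field

    @dataclass
    class CompiledObject:
        pure_section: bytes         # position independent code and pure data
        impure_template: bytes      # loader database for the impure section

    @dataclass
    class ProcessAddressSpace:
        regions: dict = field(default_factory=dict)

    def load(proc, name, obj):
        proc.regions[name + ".pure"] = obj.pure_section              # shared
        proc.regions[name + ".impure"] = bytearray(obj.impure_template)  # private

    sort_obj = CompiledObject(pure_section=b"...code...", impure_template=bytes(16))
    p1, p2 = ProcessAddressSpace(), ProcessAddressSpace()
    load(p1, "sort", sort_obj)
    load(p2, "sort", sort_obj)
    assert p1.regions["sort.pure"] is p2.regions["sort.pure"]          # shared
    assert p1.regions["sort.impure"] is not p2.regions["sort.impure"]  # not shared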


                USER ENVIRONMENT OBJECTIVES

           o UNIFORM NAME SPACE

           o BIT MAP DISPLAY (TEXT, GRAPHICS)

           o CONCURRENT PROCESSING PER USER

IV.1  USER ENVIRONMENT OBJECTIVES:

A key objective in designing the APOLLO user environment is to
combine simplicity and uniformity with a high degree of
functionality.

All objects that the system is capable of referencing can be
expressed in a uniform name space that spans the entire net-
work.  Further, a bit map display is used to represent text and
graphics output.  The output from multiple programs can be
concurrently displayed through multiple windows, thereby
providing a degree of functionality unavailable on conventional
systems.


IV.2  USER NAME SPACE:

The namespace seen by a user is organized as a hierarchical
tree structure.  The root of the tree represents the most
global portions of the network.  The leaves at the bottom of
the tree represent particular objects, such as programs, files
and devices.  Intermediate branches are used to represent
collections of objects that have some common association.  For
example, an entire node on the network may be represented by
an entire subtree in the tree hierarchy.  The namespace hier-
archy represents a logical organization of the network.  All
leaves, or the lowest level of the tree, represent objects, and
the user has a variety of syntactical forms in which to express
the location of an object.  First of all there is the network-
wide syntax which is comprised of two leading slashes followed
by a full path name to reach the object.  Second, there is the
local root relative syntax which can be used to express objects
that are local to a particular user's node.  Syntactically
this is expressed by one leading slash followed by a relative
path name.  For convenience, the user may attach his working
directory to any point in the tree name hierarchy; and, con-
sequently, he may express a path name which is relative to his
working directory.  He does this by expressing the relative
path name without a leading slash.  Each branch in the tree
is represented as a directory object and contains a list of
name associations.  For each name at a lower level there is
contained within the directory either a UID or a path name.
If it is a path name, the path name is syntactically substi-
tuted into the name being searched and the search continues.
This substitution continues until the search resolves to a
UID.

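The toy resolver below walks these three syntactical forms
through a set of directory objects whose entries are either
UIDs or path names.  It is written in Python purely for
illustration; the names, UIDs and directory contents are
invented for the example.

    # Directories are dictionaries; a value is either a UID (an integer)
    # or a path name, which is substituted and resolution continues.

    NETWORK_ROOT = {
        "MYNODE": {"PROGRAMS": {"SORT": 0x0001,
                                "LIST": "//MYNODE/PROGRAMS/SORT"}},
        "OTHERNODE": {"DATA": {"REPORT": 0x0002}},
    }

    def resolve(path, node="MYNODE", working_dir="//MYNODE/PROGRAMS"):
        if path.startswith("//"):
            full = path                          # network-wide syntax
        elif path.startswith("/"):
            full = "//" + node + path            # local root relative
        else:
            full = working_dir + "/" + path      # working directory relative
        entry, parts = NETWORK_ROOT, [p for p in full.split("/") if p]
        for i, name in enumerate(parts):
            entry = entry[name]
            if isinstance(entry, str):           # a path name: substitute it
                return resolve("/".join([entry] + parts[i + 1:]),
                               node, working_dir)
        return entry                             # a UID

    print(hex(resolve("//OTHERNODE/DATA/REPORT")))   # network-wide
    print(hex(resolve("/PROGRAMS/SORT")))            # local root relative
    print(hex(resolve("LIST")))                      # follows the substituted name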


IV.3  CONCURRENT USER ENVIRONMENT:

The notion of concurrency on the APOLLO DOMAIN system is un-
available on conventional timesharing systems.  On these latter
systems users are generally required to execute one function
at a time.  When a user switches from one function to another,
the context of the previous function is lost and has to be
subsequently recreated.  The APOLLO integral bit map display
provides the user with the capability of displaying multiple
windows simultaneously.  Each window can contain the output
of related or unrelated applications.  For example, one window
can contain the sequential output of a program while a second
window graphically displays the accumulated output of the same
program.  Similarly, program development, compilation, editing
and an on-line help system can all be concurrently displayed.

Consequently, the APOLLO system is designed to accommodate a
total user environment, which we believe always involves a 
number of concurrent functions.


IV.4  DISPLAY MANAGER:

The display manager represents the outermost layer of logic
within the APOLLO system - that which controls the relationship
among the many windows projected onto the CRT display.  Accordingly,
the APOLLO system adds two additional layers above the con-
ventional programming level.  As mentioned earlier, a program-
mable shell coordinates the activity of many programs.  The 
output of this shell is written into a virtual terminal, called
a PAD.  Portions of this PAD are displayed through a rectangu-
lar window which is then projected onto the CRT display.

The display manager permits multiple windows to be displayed
concurrently, each of which can be executing an independent
shell or command environment.  The philosophy of the display
manager is to allow programs to output data in a logical format,
while allowing the user to independently control what is
physically displayed.

The display manager is controlled by the use of function keys
on the user keyboard.  Pushing a function key causes the ex-
ecution (interpretation) of a user programmable sequence of
display manager primitives.  Consequently, the user can define
function keys to perform complex display manager functions.


IV.5  USER ENVIRONMENT:

The APOLLO DOMAIN operating system creates a degree of inde-
pendence between application programs and what is actually
viewed on the display.  In particular, application programs
create pads.  The pads are independently windowed onto the CRT
display totally under user control.  Window images are super-
imposed on the pads and can be moved relative to the pad in 
either a horizontal or a vertical direction.  Window images
from various pads are stacked logically on the display so
that, where they overlap, only the one on top is displayed.
Consequently, the user environment is actually a three
dimensional volume: 800
bits going across, 1024 bits going down and many levels of
windows deep.  The user can also move window areas up or down
relative to the physical display, and finally can move window
areas into and out of the display relative to other window 
areas.

Programs create the pad by writing command and data sequences
through a stream.  The window image created by the display
manager from the pad can be placed anywhere on the CRT and can
be overlayed by other window images.  Window images contain 
lines and frames.  A line is a single line sequence of characters
and has only one dimension.  A frame has two dimensions and
has a rectangular format.  It contains characters and/or graphic
data.  Finally, frames may also contain user created bit maps.
These bit maps may reside either within the pad or within a
separate user supplied object.  Pad information normally 
accumulates over the life of a process.  This allows a user to
scroll either in reverse or in forward directions over the
entire life of the process.  However, for efficiency's sake,
certain commands may be emitted from the program to delete
all or part of the pad as appropriate.
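
A much simplified model of the pad and window relationship is
sketched below (Python, for illustration only).  The class and
method names are invented; only the idea of a pad that
accumulates output and a window that selects what is displayed
follows the description above.

    class Pad:
        def __init__(self):
            self.lines = []            # grows over the life of the process
        def write(self, text):
            self.lines.append(text)

    class Window:
        def __init__(self, pad, height):
            self.pad, self.height, self.top = pad, height, 0
        def scroll(self, delta):       # move the window image over the pad
            self.top = max(0, min(self.top + delta, len(self.pad.lines)))
        def visible(self):
            return self.pad.lines[self.top:self.top + self.height]

    pad = Pad()
    for i in range(10):
        pad.write("output line %d" % i)
    window = Window(pad, height=3)
    window.scroll(7)
    print(window.visible())            # the three lines currently displayed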


V.1  SUMMARY OF KEY POINTS:

An APOLLO computer system is comprised of a number of high
performance dedicated computers interconnected over a local
area network.  Each of these nodes contains a large machine
architecture which implements a demand paged network wide
virtual memory system, allowing a large number of processes for
each user, each process having a very large linear virtual
address space.  Languages that run on the APOLLO system include
Fortran 77 and Pascal and are implemented to take advantage
of the machine's 32 bit orientation.

An object oriented network operating system coordinates the
user's access to network wide facilities.

Objects, representing programs and data files, etc., are inde-
pendent of their network location and can be accessed uniformly
by anyone on the system.

The user's display terminal is capable of displaying multi-font
text and graphics, and can be divided into multiple windows,
each displaying independent program output.

The APOLLO system is designed around high technology.  It 
incorporates VLSI CPU chips, a large capacity Winchester disk,
advanced communications, and high-density RAM technologies.