by: Uday S. Murthy, Ph.D., ACA and S. Michael Groomer, Ph.D., CPA, CISA

Technology: Hardware and Software

Learning Objectives

After studying this chapter you should be able to:

Systems theory suggests that every information system has the following components: input, processing, storage, and output. In this chapter of the Technology Primer we focus on the technology of information systems in terms of hardware and software. Hardware components can be broadly categorized into input, processing, storage, and output technologies. Software will be discussed under three classifications -- systems software, programming languages, and applications software.

This chapter is somewhat lengthy and contains what might seem to be a fairly esoteric discussion of technical issues surrounding personal computer (PC) hardware and software. However, knowledge of these technical issues and the related vocabulary is important for two reasons. First, you may very likely be called upon to advise your employer or a client on hardware and software issues, and you need to be prepared to do so. Second, information systems professionals speak the language of computers; if you do not know this language, you will be unable to communicate with them effectively. To make this point clearer, visit the web sites of Gateway or Dell, two of the largest suppliers of made-to-order PCs in the world. As you explore the various PC configurations, you are immediately forced to deal with considerable technical jargon. If you are already familiar with terms like "SDRAM" or "AGP" or "Ultra ATA" then you may not find this chapter very enlightening, but we suspect that the vast majority of you are not conversant with these terms. The purpose of this chapter is to explain these and many more technical terms. If you are not conversant with the vocabulary, the recommendations you make to your employer or a client could be seriously flawed, potentially costing the client many thousands of dollars.


As indicated above, hardware can be classified into the following broad categories: input, processing, storage, and output. Input technologies are used to convert data into computer-readable form, either automatically or through varying degrees of human involvement. Processing technologies are contained within the "black box" itself and are used to convert raw data into meaningful information. Data storage technologies are employed to either temporarily or permanently store data. Finally, output technologies come into play in making information available to the end user. Conventional "hard copy" output in the form of paper, as well as "soft copy" output on the computer screen, are two of the most common output options used in computer-based information systems. A number of recent developments in hardware have revolutionized input, processing, storage, and output technologies. Multimedia technologies in general, and optical disks or CD-ROM in particular, have become extremely popular while their costs continue on a downward spiral. Note that the discussion of hardware that follows focuses primarily, although not exclusively, on microcomputer technology simply because you will very likely be interacting with microcomputers in either a stand-alone or a networked environment.

Input technology

There are a number of technologies available for entering data into computer systems. Older technologies, some of which are still in use, require extensive human involvement. Newer technologies for data input require less extensive human involvement. Some new technologies almost entirely automate the process of converting data into computer-readable form.

An example of a relatively old technology is a keying device used for data entry. There are some variations of keying devices but they all involve manual entry of data using a keyboard attached to some device. The keyboard could be attached either to a tape device or a disk device. Additionally, the keying device may or may not be directly connected to the computer's central processing unit (CPU). When the keying device is connected to the CPU then data entry is said to occur on-line. When the keying device is not connected to the CPU then data entry is said to occur off-line. With off-line data entry, the data are stored temporarily on a tape or a disk and are read into the CPU at some later stage. 

Input devices commonly used in personal computing environments include the mouse and its variants such as the trackball, the trackpad, and the trackpoint. These devices involve manipulating a hand-held device to move the cursor on the computer screen. The devices also allow the user to select objects and perform actions by clicking on buttons either attached to or adjacent to the mouse, trackball, trackpad, and trackpoint devices. A mouse is an opto-mechanical device in which movements of a ball underneath the mouse cause corresponding movements of the cursor on the computer screen. A trackball is simply an inverted mouse in which the user directly manipulates the ball rather than moving the entire mouse to cause movement in the encased ball. The trackpad device presents the user with a small flat panel about three inches square. The user moves his or her finger on the pad to control the movement of the cursor on the screen. Finally, a trackpoint is an eraser-head like device wedged between keys near the center of the keyboard. The user presses the trackpoint in the direction the cursor should be moved on the screen. On a mouse, the buttons to be clicked by the user are placed on the top of the mouse itself. For the trackball, trackpad, and trackpoint, the buttons are typically placed below and to the left and right of either the ball, the pad, or the trackpoint. Today's graphical user interface (GUI) operating systems almost require the use of a mouse or similar device. 

Light pens and touch screen devices are also used for input in certain applications. A light pen is a small, photosensitive device that is connected to a computer. By moving the light pen over a computer screen, the user can in effect manipulate data in the computer system. Touch screen devices, commonly used in airports and large hotels, allow the user to simply touch the computer screen with a finger to make selections. Another technology for input that has recently matured is audio input or voice input. It is now possible to speak to a computer not only to issue commands but also to enter data into the computer system. At the heart of this technology is voice recognition software that is capable of recognizing words spoken clearly into a microphone. Although many strides have been made in voice recognition technology, most systems typically require the user to "train" the software to recognize the user's voice since the same word may sound very different to the computer as a function of differences in pronunciation, tone, and inflection. In addition to audio input, video input is also possible where a video or still camera transmits moving or static images directly into the computer system. It is important to recognize, however, that audio and video data streams take up enormous amounts of storage space. 

Let us now turn to input devices which automate, to varying degrees, the task of entering data into a computer system. Bar code scanners, optical character readers (OCR), and magnetic ink character readers (MICR) are all designed to automatically read data. A bar code scanner is a special device designed to read the Universal Product Code (UPC) symbol attached to a product. This UPC is attached to most goods sold today. An OCR device works much like a bar code scanner except that it is designed to read characters that are imprinted in a specific manner. A MICR device is used by banks and other financial institutions to automatically read the magnetically coated characters imprinted at the bottom of checks, deposit slips, and similar documents. A key advantage of these devices is that data entry is fast and virtually error free. Bar code scanners in particular have fostered huge efficiencies in the check out lanes at grocery and department stores. A related input technology is the point-of-sale (POS) device, which reads the bar code of products being sold and instantaneously triggers a series of actions such as updating inventory, reorder levels, and perhaps even a purchase order or a production schedule. Thus, more than simply automating the task of entering data, a POS device goes on to perform related actions based on the automatically entered data.

Page and hand held scanners are other input devices which can be used to automatically enter text and graphics into a computer system. Scanning photographs or other images into the computer system results in the creation of a graphics file which can then be edited using appropriate software. Scanning text is very useful when combined with OCR software that can convert the images of characters into editable text. Many organizations are using scanners to digitize paper documents received from external sources such as invoices from vendors. It is thus possible, at least in theory, to have an entirely "paperless" office where all data inputs are converted into computer readable form and all information outputs are delivered to users electronically. The following table lists the input devices described above.


Input Devices

On-line keying device

Off-line keying device

Mouse

Trackball

Trackpad

Trackpoint

Light pen

Touch screen device

Audio input

Video input

Bar code scanner

OCR reader

MICR reader

Point-of-sale (POS) device

Page and hand-held scanners


Processor technology

Having discussed alternative input technologies, let us now turn our attention to processor technology. At the core of any computer system is the central processing unit (CPU). The CPU comprises a control unit and an arithmetic-logic unit (ALU). As its name suggests, it is the ALU that performs all the calculations and comparisons. In essence, the ALU is the number crunching unit that performs the bulk of the work within a computer system. The control unit, which is synchronized to the computer's internal clock, receives program instructions and coordinates the functioning of the ALU. The speed of operation of the control unit is a function of the speed of the computer's clock unit, which oscillates at a frequency of several million cycles per second. For example, the clock on a 100 megahertz (MHz) processor oscillates at a speed of 100 million cycles per second. Thus, the speed of the clock unit is one determinant of the speed of a computer since the operation of the CPU is synchronized to the internal clock. The I/O bus is simply a channel over which information flows between the CPU and peripheral devices like a modem, hard drive, or serial port. The following diagram shows the CPU and its interaction with the memory components within a typical computer system.
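The relationship between clock frequency and cycle time is simple arithmetic. The following sketch (in Python, purely for illustration) computes the duration of one clock cycle:

```python
def cycle_time_ns(clock_mhz):
    """Return the duration of one clock cycle in nanoseconds."""
    cycles_per_second = clock_mhz * 1_000_000
    return 1_000_000_000 / cycles_per_second

print(cycle_time_ns(100))   # a 100 MHz clock -> 10.0 ns per cycle
print(cycle_time_ns(500))   # a 500 MHz clock -> 2.0 ns per cycle
```

Doubling the clock speed halves the time available for each cycle, which is one reason a faster processor demands faster supporting components.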



As shown in the above diagram, typically only the control unit and the ALU are housed on the processor chip; the memory unit is external to this chip. The memory unit comprises electronic registers that temporarily hold data and instructions both before and after they are processed by the ALU. Each location on the memory unit has a unique address, and the ALU accesses a memory location by activating the address of that location. The memory unit in the CPU is also referred to as primary memory or random access memory (RAM). Today, microcomputers are typically equipped with a minimum of 32 megabytes of RAM, or approximately 32 million bytes of storage. Many PCs come configured with 64 or 128 megabytes of RAM.  High end workstations and mainframe computers house anywhere from 128 megabytes to over 1 gigabyte of RAM. Most present day CPUs contain another type of memory called cache memory. Cache memory, which is relatively small compared to RAM, is used to store data and instructions that are likely to be needed by the ALU. Access to cache memory is about four times faster than accessing RAM. On a well designed processor, the ALU will find what it needs in cache memory 95% of the time. Due to the high cost of cache memory, microcomputers rarely house more than 512 kilobytes of cache memory and typically house only 256 kilobytes of cache memory. Data and instructions stored in both RAM and cache memory are lost when the power supply to the CPU is turned off.
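Because memory capacities are quoted in powers of two, a "megabyte" of memory is 2**20 = 1,048,576 bytes rather than exactly one million. A short Python sketch makes the conversion explicit:

```python
def megabytes_to_bytes(mb):
    """A megabyte of memory is 2**20 = 1,048,576 bytes."""
    return mb * 2**20

print(megabytes_to_bytes(32))   # 33554432 -> roughly 32 million bytes
```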

The memory unit and the CPU (control unit and ALU) communicate via a channel. This channel or data path is called the internal bus. There are three manifestations of this internal bus. The data bus sends data and instructions back and forth between the memory unit and the processor unit. The address bus identifies the memory locations that will be accessed during the next processing cycle. Finally, the control bus is used to carry signals from the control unit which direct the operation of the ALU. The width of the internal bus, or the data path, is another factor that determines the speed of the CPU. Older buses were 16 bit, but newer buses are 32 and even 64 bits. Thus, wide data paths and fast clock units contribute to faster CPUs. 
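The combined contribution of data path width and clock speed to throughput can be quantified. The sketch below is a simplification that assumes one transfer per clock cycle, but it shows why widening the bus and raising the clock both increase peak bandwidth:

```python
def peak_bandwidth_mb_per_s(bus_width_bits, clock_mhz):
    """Peak transfer rate, assuming one transfer per clock cycle."""
    bytes_per_transfer = bus_width_bits // 8
    transfers_per_second = clock_mhz * 1_000_000
    return bytes_per_transfer * transfers_per_second / 1_000_000

print(peak_bandwidth_mb_per_s(32, 66))    # 32-bit bus at 66 MHz  -> 264.0 MB/s
print(peak_bandwidth_mb_per_s(64, 100))   # 64-bit bus at 100 MHz -> 800.0 MB/s
```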

The bus system on Intel-based personal computers today typically supports both the Industry Standard Architecture (ISA) and the Peripheral Component Interconnect (PCI) standard. With PCI comes enhanced data throughput, automatic component and adapter configuration, and processor independence. PCI also supports "Plug and Play". This feature allows a user to add devices to a computer, like a sound card, without having to physically configure and set up the card. This is accomplished with a device that supports "Plug and Play" and an operating system like Windows 98 that recognizes "Plug and Play" devices.

Today, each peripheral device needs its own port, usually gained through one of a few add-in slots available on the PC motherboard. To install all but the most fundamental peripherals -- such as a new internal modem, TV card, or SCSI disk drive -- the user must open the case and insert a board. Often, switches must be set, jumper wires configured, or the different physical connectors, such as serial or parallel, matched. In contrast, simplicity and ease stand at the center of the Universal Serial Bus (USB). Drawing its intelligence from the host PC, USB ports automatically detect when devices are added or removed, which, unlike with conventional add-in slots, can be done with the power on and without having to reboot the system. Moreover, offering true plug-and-play operation, the Universal Serial Bus automatically determines what host resources, including driver software and bus bandwidth, each peripheral needs and makes those resources available without user intervention. Lastly, the Universal Serial Bus specification defines a standardized connector and socket, which all peripherals can use, thus eliminating the existing mixture of connector types. With the new bus, only one peripheral device, say the keyboard, needs to plug directly into the PC. The other devices simply connect into either an expansion hub built into the keyboard or monitor or into a stand-alone USB box. Typical devices that will connect to the Universal Serial Bus include telephones, modems, printers, microphones, digital speakers, writing styluses, joysticks, mice, scanners, and digital cameras.

Another technology for connecting peripheral devices to the PC is FireWire (IEEE 1394). This technology is most commonly found on the Macintosh, although it is beginning to appear on Intel-based PCs. FireWire is similar in spirit to the Universal Serial Bus. While USB is well suited to lower-speed multimedia peripherals, FireWire is aimed at higher-speed multimedia peripherals such as video camcorders, music synthesizers, and hard disks. FireWire is likely the future of computer I/O technology. Together, FireWire and USB radically simplify I/O connections for the user. The age of SCSI, dedicated serial, modem ports, and analog video is fast coming to a close.

Let us now relate the above discussion of processor technology specifically to microcomputers. In 1981, Intel provided the processor for the first serious personal computer, the IBM Personal Computer (PC). This machine used the Intel 8088 microprocessor, running at a lightning-fast 4.77 megahertz. In the intervening years the PC and PC compatibles used Intel 80286, 80386, and 80486 processors. Each of these microprocessors brought more speed and capabilities.

Intel’s successor to the 80486 was the Pentium processor. This widely used microprocessor, first offered in 1993, was a technological marvel when it was introduced -- it had over 3 million transistors integrated on a chip of about 2 square inches! The bus size of the Pentium is 64 bits, in contrast to the 32-bit bus of the older Intel 486 processor. It has a built-in math coprocessor, also called the floating point unit (FPU). The FPU is dedicated to the task of performing arithmetic calculations, thus freeing the ALU to perform other tasks such as logical comparisons and the execution of other instructions such as fetching and delivering data. Through its superscalar architecture, the Pentium processor could perform two operations in one clock cycle. In terms of clock speeds, the fastest Pentium processor operated at 233 MHz. The Pentium processor had 16 kilobytes of Level 1 cache memory (i.e., on the chip) -- 8 K of the cache is dedicated to data and 8 K to instructions. Note that Level 1 (L1) cache, which is integrated into the microprocessor chip, and Level 2 (L2) cache, which is usually on a separate chip, are discussed further in the next subsection.

The Pentium Pro processor, which is now obsolete, had a clock speed of 200 MHz. In order to substantially enhance the processor line's ability to handle multimedia applications, Internet applications, and enhanced storage devices like DVD, Intel developed its MMX (MultiMedia eXtension) technology. This technology is an extension to the Intel Architecture (IA) instruction set. It uses a single instruction, multiple data technique to speed up multimedia and communications software by processing multiple data elements in parallel.

The Pentium II processor was released in early May 1997. The Pentium II can be thought of as a Pentium Pro with MMX support. The Pentium II initially came in versions with clock speeds of 233, 266, 300, and 333 MHz, all with a 66 MHz internal bus. In April of 1998, a new generation of Pentium II processors, code-named Deschutes, was released. These Pentium II processors are now available in speeds of 350, 400, and 450 MHz and are based upon the .25 micron manufacturing process. This innovative process makes it possible for these CPUs to include over 7.5 million transistors, resulting in more power in less space. The 450 MHz Pentium II processor was released in the fourth quarter of 1998. Aside from the higher CPU speeds, the most significant change in PCs based on these new processors is a shift from a 66 MHz system bus to a 100 MHz system bus. This is the first time Intel has increased bus speed since the debut of the Pentium processor. The major benefit of the 100 MHz bus in Pentium II PCs is that it provides 50 percent more bandwidth to memory. Thus, peripheral devices, like the hard drive, are able to communicate faster with the programs running in RAM.

The latest variant of the Pentium II processor is the Pentium II Xeon processor, which is intended specifically for servers. The Xeon family of processors has large, fast caches (1 and 2 MB L2 caches) and includes a number of other architectural features to enhance the performance of servers. At the time it announced the 350 and 400 MHz Pentium II processor technology, Intel also announced the availability of 233 and 266 MHz Pentium II mobile technology for laptop computers. This is the first time in Intel's history that laptop and desktop technologies were at the same relative level; previously, technology for laptops lagged that found in desktop machines. This also suggests that for the first time, laptops can begin to rival desktops in terms of processor speed and other capabilities (the availability of larger hard drives, zip drives, and DVD drives).

Intended mainly for low cost PCs, Intel released the Celeron processor in June, 1998. In its initial version, the Celeron processor lacked an L2 cache. Intel had hoped that the Celeron processor would stave off competition from low cost chip producers like AMD. However, given the low market acceptance of the initial release of the Celeron processor, Intel added a 128 KB L2 cache to the Celeron processor. This processor is still offered by Intel, in speeds ranging from 500 to 700 MHz. A low-power mobile version of the Celeron processor, intended for laptop computers, is also offered in speeds as high as 650 MHz.

In early 1999 Intel released the Pentium III microprocessor. The Pentium III is faster than the Pentium II, especially for applications written to take advantage of the new set of instructions encoded into the Pentium III for multimedia applications (code-named "Katmai" instructions). These 70 new computer instructions make it possible to run 3-D, imaging, streaming video, speech recognition, and audio applications more quickly. The Pentium III is currently offered in clock speeds ranging from 650 MHz to a blazing 1 gigahertz!

The latest variant of the Pentium processor, the Pentium 4, has recently been released by Intel. Available in speeds of 1.3, 1.4, or 1.5 gigahertz, the Pentium 4 processor features a speedy 400 MHz system bus. This processor is designed to deliver enhanced performance for applications such as Internet audio and streaming video, image processing, video content creation, speech, 3D, CAD, games, multimedia, and multitasking user environments. Thus, the processor is targeted towards “power users” and PC gaming enthusiasts, rather than general-purpose uses (for which the Pentium III processor offers more than adequate performance).

Mobile variations of the Pentium III processor now exist and run at speeds up to 850 MHz. Prior to the appearance of this technology, owning a laptop typically meant that your machine was not quite as fast as a desktop. With the mobile Pentium III processor, this is no longer true.

Intel is not the only game in town. AMD provides Pentium-compatible processors. AMD released its K6 class of processors in early April 1997. The K6 chip, which initially came in 166 MHz, 200 MHz, 233 MHz, and 300 MHz flavors, included MMX extensions. An updated version of the K6, referred to as the AMD K6-2 processor with "3D-Now" technology, was offered in 1998 running at 300 and 333 MHz but now runs at a top speed of 475 MHz. As the "3D-Now" label suggests, these processors are optimized to render vivid three-dimensional graphics, ideal for games and high-end graphics applications. AMD also offers the K6-III processor, intended to compete with Intel's Pentium III processor. This processor comes with 320 KB of internal cache--including a 256 KB L2 cache in addition to the 64 KB L1 cache. A unique feature of the K6-III is its support for a "trilevel cache," which means that an external Level 3 cache can also be added to enhance performance. There is a mobile version of this processor as well. The high-end processor from AMD is the Athlon. Like the fastest Pentium III processor, this processor also has a top speed of 1 gigahertz. Recently, AMD released a new line of processors less expensive than the Athlon, called Duron, which is designed to compete against Intel's Celeron line of processors aimed at value-conscious customers.

In contrast to the Pentium and Pentium Pro processors, Motorola's PowerPC 750 processor, running at 200 to 400 MHz, is a reduced instruction set computing (RISC) processor. RISC implies that the processor recognizes only a limited number of assembly language instructions, but is optimized to handle those instructions. The Pentium processor is considered a complex instruction set computing (CISC) processor, which means that it is capable of recognizing and executing a wider variety of instructions. As a result of being optimized to handle fewer instructions, a RISC processor can outperform a conventional CISC processor when running software that has been appropriately designed to take advantage of the RISC processor. Tests have shown that a RISC processor can be as much as 70% faster than a CISC processor. Like the Pentium, the PowerPC also has on-chip cache, but the size of the cache is 32K rather than 16K. The PowerPC was also designed using superscalar architecture, but can perform three operations in one clock cycle (as opposed to the Pentium's two instructions per cycle). The following table summarizes the above discussion of microcomputer processor technology.


Microprocessor Comparison



AMD Athlon 

This processor is a superpipelined, superscalar x86 processor designed for high clock frequencies.  It uses AMD's Enhanced 3DNow!™ technology for leading-edge 3D performance, including 5 new DSP instructions to improve soft modem, soft ADSL, Dolby Digital surround sound, and MP3 applications. 

Pentium II Xeon

1 MB or 2 MB of L2 cache, optimized for servers.

Pentium III

64 bit bus, 32K L1 cache, dedicated 512K L2 cache, 100 and 133 MHz bus, dual independent bus architecture, dynamic execution, optimized for 32 bit applications, 70 "Katmai" instructions to enhance multimedia applications.

Pentium 4

400 MHz system bus; rapid execution engine pushes the processor's Arithmetic Logic Units to twice the core frequency, resulting in higher execution throughput and reduced latency of execution; SIMD Extensions 2 (SSE2) extends MMX™ and SSE technology and adds 144 new instructions.

Motorola PowerPC

64 bit bus, 32K on-chip cache, RISC processor, and three (3) operations in one clock cycle.


Storage technology

Temporary storage

In our discussion of processor technology we have already discussed temporary storage of data and instructions within the CPU. The memory unit, or random access memory (RAM), is the main location for temporary storage of data and instructions. Many of today's most complex software programs require large amounts of RAM to operate. Cache memory is another type of temporary internal storage of data and instructions. A third type of memory is read-only memory (ROM) which, as the name suggests, cannot be altered by the user.

For an application software program to run, it must first be loaded into the computer's RAM. The main RAM unit is sometimes referred to as dynamic RAM or DRAM to distinguish it from static RAM or SRAM, which refers to the computer's cache memory (to be discussed a little later). As indicated above, many software programs require a minimum amount of RAM in order to successfully load and run. Program instructions are loaded into primary memory, RAM, from a secondary storage device, typically a magnetic disk drive (referred to as a hard drive) or a floppy disk drive. As needed, data requiring processing are also loaded into RAM. These data and instructions can be transferred to and processed by the computer's arithmetic-logic unit (ALU) very quickly, as directed by the control unit. Access times to RAM are expressed in nanoseconds (a nanosecond, ns, is a billionth of a second -- the lower the ns number, the faster the access time) and typically range from 60 to 80 ns. Eventually, data are written back from RAM to the secondary storage device - either a hard drive or a floppy drive. The size of RAM dictates the number of applications and/or programs that can be run simultaneously. The larger the RAM, the greater the number of programs that can be run concurrently. Applications also run faster when the size of RAM is large because data and instructions needed for processing are more likely to be found in RAM, which can be accessed very quickly, than on the secondary storage device, to which access is considerably slower.

As recently as two years ago, most microcomputers were equipped with asynchronous DRAM. In asynchronous mode, the CPU sends a request to RAM which then fulfills the request; these two steps occur in one clock cycle. Synchronous DRAM (SDRAM), which is more expensive than asynchronous DRAM, is now commonly available for PCs. Synchronous DRAM stores the requested data in a register and can receive the next data address request while the CPU is reading data from the previous request. The CPU and RAM can therefore be synchronized to the same clock speed; hence the term "synchronous" DRAM. Systems equipped with SDRAM can significantly outperform systems with conventional DRAM. An advanced type of memory, usually used only for servers because of the high cost, is Error Checking and Correcting (ECC) memory. This type of memory can find and automatically correct certain types of memory errors, thereby providing greater data integrity. By contrast, non-ECC memory would result in a system crash when a memory error is encountered. RDRAM, short for Rambus DRAM, is a type of memory (DRAM) developed by Rambus, Inc. Whereas the fastest current memory technologies used by PCs (SDRAM) can deliver data at a maximum speed of about 100 MHz, RDRAM transfers data at up to 600 MHz. RDRAM is touted by some as the preferred replacement for SDRAM. However, RDRAM remains very expensive and is therefore found mainly in high-end workstations.
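To give a flavor of how ECC memory detects errors, the following toy Python sketch implements a single even-parity check bit. (Real ECC memory uses more elaborate Hamming-style codes that can also correct errors; this simplified example only detects them.)

```python
def parity_bit(bits):
    """Even parity: the check bit makes the total count of 1s even."""
    return sum(bits) % 2

word = [1, 0, 1, 1, 0, 0, 1, 0]   # one byte of data
check = parity_bit(word)

# An undamaged word plus its check bit contains an even number of 1s.
assert (sum(word) + check) % 2 == 0

# Simulate a memory error by flipping one bit: the parity no longer matches.
word[3] ^= 1
print((sum(word) + check) % 2)    # 1 -> error detected
```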

Cache memory can significantly improve system performance. Cache memory, also referred to as static RAM (SRAM), is an area of very high speed memory which stores data and instructions that are likely to be needed. When the ALU needs data and/or instructions, it first accesses cache memory and accesses RAM only if the needed data or instructions were not found in cache memory. However, more often than not the ALU will find the needed data and instructions in cache memory. Why does cache memory speed up processing? Whereas dynamic RAM (DRAM) is typically accessed at the rate of 60 to 80 ns, cache memory -- static RAM (SRAM) -- can be accessed at under 10 ns. Thus, access times to cache memory are six to seven times faster than that for RAM. Most processors in today's computers include a certain amount of cache memory built into the chip itself. As discussed earlier, the Intel Pentium processor comes with 16K of cache memory built into the chip, 8K of which is used for data and 8K for instructions. Cache memory that is integrated onto the chip itself is referred to as Level 1 (or simply L1) cache. 
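The benefit of a high cache hit rate can be estimated with a simple weighted average. The sketch below uses the figures from the text -- a 95% hit rate, roughly 10 ns SRAM access, and roughly 70 ns DRAM access (illustrative numbers, not a benchmark):

```python
def average_access_ns(hit_rate, cache_ns, ram_ns):
    """Expected memory access time given a cache hit rate."""
    return hit_rate * cache_ns + (1 - hit_rate) * ram_ns

print(average_access_ns(0.95, 10, 70))   # about 13 ns on average
```

Even though DRAM is accessed seven times more slowly here, the high hit rate keeps the average access time close to the cache speed.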

Other than cache memory built into the chip, the system board can also house external cache memory (i.e., external to the processor chip) on a separate chip. While this external cache, referred to as Level 2 (L2) cache, is somewhat slower than the cache built into the chip, it speeds up processing nevertheless. L2 cache can be either asynchronous or synchronous. In an asynchronous cache design, the CPU sends an address request to the cache which looks it up and returns the result. All three of these steps occur in one clock cycle. Asynchronous cache is adequate for computers with clock speeds under 100 MHz. But at speeds of 100 MHz and above, the three steps simply cannot be performed in one clock cycle. The solution is synchronous cache, a variation of which is called pipeline burst cache. In these designs, the address request, access, and return steps are spread over more than one clock cycle. In this manner, cache accesses can occur while the CPU is reading data from the previous access thereby speeding up the process.

Instructions that direct the computer's operations when power is turned on are stored in ROM. These instructions involve checking the memory registers, determining which devices are connected to the computer, and loading the operating system into RAM. Unlike RAM and cache memory, the contents of ROM are not lost when power is turned off (ROM is nonvolatile; a small long-life battery separately preserves the computer's configuration settings and clock). In older computers, ROM instructions were stored in a chip housed on the system board and could be upgraded only by replacing the ROM chip. In newer computers, ROM instructions are stored in a special type of memory located on the system board, referred to as "flash" memory. The ROM instructions located in flash memory can be easily upgraded via a diskette. The various types of memory are summarized in the following table.



Memory Types

Cache memory: High speed memory used to store data and instructions that are likely to be required in the next cycle. Cache memory represents the speediest type of memory.

RAM: Random access memory; used to temporarily store data and instructions to run applications and the operating system.

ROM: Read only memory; used to permanently store instructions required upon boot up. "Flash" ROM instructions facilitate easy upgrades.

SDRAM: Synchronous RAM; speedier than asynchronous RAM because the CPU does not have to wait for the next instruction.

RDRAM: The likely next replacement for SDRAM. Currently much more expensive than SDRAM.

ECC memory: Error checking and correcting memory. A very expensive type of memory used mainly for servers.



Permanent Storage

Let us now turn to a discussion of permanent storage of data. The three primary media for permanent storage of data are magnetic tapes, magnetic disks, and optical disks (also referred to as compact digital disks, or CD-ROM). Magnetic tape is a low cost sequential storage medium. While the low cost is an advantage, the major drawback of tape is that data must be accessed in sequence. Thus, to access a record in the tenth block on a tape, the first nine blocks must be traversed - the tenth block cannot be directly accessed. Although magnetic tapes were used extensively in the early days of computing, the dramatic drop in the cost of magnetic disks has relegated tape to be used primarily for backup purposes. Most computer systems use tape drives for periodic backup of data. In case of system or magnetic disk failure, data can be restored from the backup tape. On mainframe computer systems tapes are stored in the form of reels, but on microcomputers tapes are housed within cartridges and are thus more compact and durable. 

Magnetic disks, also referred to as hard disks, are more expensive than magnetic tape but have the advantage of random or direct access. A record in the tenth block can be directly accessed without traversing the first nine blocks. Access times for magnetic disks are expressed in thousandths of a second (milliseconds, or ms). Current magnetic disks support access times under 12 ms, with some as low as 7 ms. Records stored on magnetic disks are overwritten when they need to be updated. Magnetic disk drives are sealed units with one or more disk surfaces. Each surface has a number of concentric circles or "tracks." Each track in turn is divided into a number of sectors, which is where the data are stored. Thus, a record's address comprises the disk surface, the track number, and the sector number at which it is located. Hard disks for microcomputer applications rotate at high speeds, from 5,400 revolutions per minute (rpm) and 7,200 rpm all the way up to 10,000 rpm. The capacity of magnetic disk drives varies, but 10 gigabytes is considered a bare minimum. The attached picture shows the inside of a disk drive. In this picture, the top of the case has been removed to show the disk platters. 
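The surface/track/sector addressing scheme just described can be sketched in a few lines. The geometry figures and the function name below are assumptions chosen for illustration, not the layout of any actual drive:

```python
# A hypothetical disk geometry, to illustrate how a record's address
# is composed of surface, track, and sector. All figures are assumed.
SURFACES = 4
TRACKS_PER_SURFACE = 1024
SECTORS_PER_TRACK = 63

def to_address(logical_sector):
    """Translate a logical sector number into (surface, track, sector)."""
    sector = logical_sector % SECTORS_PER_TRACK
    track_index = logical_sector // SECTORS_PER_TRACK
    track = track_index % TRACKS_PER_SURFACE
    surface = track_index // TRACKS_PER_SURFACE
    return surface, track, sector

print(to_address(0))      # the very first sector on the disk
print(to_address(70000))  # some sector deeper into the disk
```

Because the address can be computed directly from the record's position, the drive can move its read/write head straight to the data, which is precisely what "direct access" means.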

There are two primary types of interfaces to magnetic disks in microcomputers. The first and cheaper type is Ultra ATA. "ATA" stands for Advanced Technology Attachment and is synonymous with extended integrated drive electronics (EIDE). As the name suggests, the circuits that control the drive are integrated onto the drive itself, and the connector simply provides the channel between the drive and the system board. Disk drives with the Ultra ATA interface are capable of a maximum data throughput of 33 megabytes per second (MB/s). However, the average sustained throughput of Ultra ATA drives is still under 20 MB/s. Ultra ATA drives can have a capacity as high as 80 GB, and drive capacities above 20 GB are quite common on present day PCs. 

The second and more expensive type of interface is the small computer system interface (SCSI, pronounced "scuzzy"), which typically requires a separate controller card. The latest incarnation of the SCSI interface is Ultra2 SCSI, which is capable of a burst data throughput of 80 MB/s. Also commonly available is Ultra SCSI, which offers a sustained data throughput of 40 MB/s. Since Ultra2 and Ultra SCSI drives have a much higher data throughput, they are often chosen for servers. Regarding disk access times, however, it should be noted that Ultra ATA and Ultra SCSI drives have similar sub-10 ms access times, since both interface types can have drives spinning at 7,200 revolutions per minute (rpm). Like Ultra ATA drives, SCSI drives also have large capacities; the largest capacity SCSI drive available today is 50 GB. By way of a cost comparison, as of January 2001 a good quality 18 GB Ultra SCSI drive cost around $190, whereas a good quality 20 GB Ultra ATA drive cost under $100. 

Whereas magnetic disks are usually permanently affixed within a computer system, floppy disks are transportable and thereby permit data to be copied and moved between computer systems. A floppy disk contains a single magnetic disk shielded in a hard plastic case, a read-write opening behind a metal shutter, and a write-protect notch that can be used to make the diskette "read only," preventing both accidental erasure of files on the diskette and writing of data onto the diskette. The capacity of floppy diskettes on microcomputers is 1.44 megabytes, which can prove very limiting considering that files larger than 1.44 megabytes are frequently encountered in present day systems. Access times to data on floppy diskettes are considerably slower than access times to the same data on hard drives. Substantial reading from and writing to a floppy diskette can severely detract from the performance of a data processing system.

The "Zip" drive from Iomega is fast becoming the industry standard replacement for the 1.44 MB diskette. Many of the major personal computer vendors, such as Dell, Gateway, and Compaq, offer the Zip drive as one of the default diskette drives or as an option. These drives come in 100 MB and 250 MB versions and are USB devices; the 250 MB drive is downward compatible with the 100 MB disk. Zip disks are about the same size as a 3.5" diskette. These drives also support FireWire technology as an option. The street price of the 250 MB drive is approximately $179.

A competitor to the Iomega Zip drive is the Imation LS-120 (also referred to as the SuperDisk). It is also an alternative to the 3.5" floppy disk drive and holds 120 MB of data on a single disk. Unlike the Zip drive, the LS-120 is fully compatible with conventional 3.5" floppy diskettes -- it can read from and write to 3.5" floppy diskettes. For even larger data storage needs, Iomega offers the Jaz drive with a capacity of 2 gigabytes.  Although more expensive than the Zip drive, Jaz drives have access times in the 10-12 millisecond range (comparable to that of regular hard drives). These drives support SCSI and FireWire technology.

A technology that has become very popular on personal computers in recent years is the compact digital disk, also called the optical disk or CD-ROM (for "compact disk - read only memory"). One compact disk (CD) can store approximately 650 megabytes of data. Given this large capacity, today's multimedia applications employing audio and video clips, which are extremely data intensive, are almost exclusively distributed on CDs. The "read-only" nature of CDs indicates that conventional CDs cannot be written on and therefore cannot be used and re-used to store data. However, a variant of conventional CDs, called recordable CDs or "CD-R," has recently been developed. CD-R drives, which cost approximately $250, can not only create multimedia CDs but also write compressed data files to a CD. Thus, a CD-R drive can also be used as a backup device, with each CD holding about 1.3 gigabytes of data in compressed form. A CD-R disk costs approximately $1. A variation of CD-R technology is CD-RW (for "rewritable") which, as the name suggests, can rewrite data onto special CD-RW disks. CD-RW drives, as well as the disks, are slightly more expensive than CD-R drives and disks.

A single-speed CD-ROM drive can transfer data at the rate of about 150 kilobytes per second. A 12X CD-ROM drive can transfer data at roughly twelve times the rate of a single-speed drive (1,800 kilobytes per second). Today, variable speed (12/24X, 13/32X, or 17/48X) CD-ROM drives are commonly available, with the fastest CD-ROM drive spinning at 72X. Note that a "17/48X" CD-ROM drive spins at a minimum of 17 times and a maximum of 48 times faster than a single-speed drive. Access times to CD-ROM disks are considerably higher (i.e., slower) than to magnetic disks (hard disks). A 13/32X variable speed CD-ROM drive would have an access time of about 75 ms and a maximum transfer rate of about 4.8 MB/s, whereas many magnetic disk drives have access times below 10 ms and transfer rates of 33 MB/s. Due to their large capacities, most software manufacturers distribute their products on CDs. Users find it more convenient to install new software by inserting one CD rather than switching multiple floppy diskettes in and out of the floppy disk drive. Programs that deliver a sizable amount of sound and graphics also benefit from high speed CD-ROM drives. Permanent storage options are summarized in the table below. 
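The "NX" speed ratings above are simple multiples of the 150 KB/s single-speed base rate, as this small sketch shows (the function name is ours):

```python
# CD-ROM transfer rates scale linearly from the single-speed base rate.
BASE_KBPS = 150  # single-speed (1X) CD-ROM transfer rate, in KB/s

def cd_transfer_rate(speed_factor):
    """Peak transfer rate in KB/s for an NX CD-ROM drive."""
    return speed_factor * BASE_KBPS

print(cd_transfer_rate(12))         # a 12X drive: 1800 KB/s
print(cd_transfer_rate(32) / 1000)  # a 32X drive: about 4.8 MB/s
```

The same arithmetic explains why even a 48X CD-ROM drive (about 7.2 MB/s peak) still falls well short of a hard disk's 33 MB/s interface throughput.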

In the same way that CDs supplanted vinyl LPs, a new technology, DVD, will replace CD-ROM. DVD has been termed digital video disk or digital versatile disk. High capacity storage for the personal computer is on the verge of a major product shift. This technology provides high capacity, interoperability, and backward compatibility: DVD-ROM drives are backward compatible with CD-ROMs. With 4.7 GB per DVD disk (equivalent to 7 CD-ROMs or over 3,000 floppy diskettes), a typical DVD-ROM drive transfers DVD-ROM data at up to 13,500 KB/sec (10X) and CD-ROM data at up to 6,000 KB/sec (40X). Access times are roughly 110 ms (DVD) and 80 ms (CD). The technology used for the personal computer and for the home electronics market is the same; for example, movies on DVD disk will play both on your television and on your PC. This approach lets you use a DVD drive without losing your investment in CD-ROMs. DVD will eventually make CD-ROMs and laser disks obsolete. Viewing a DVD movie on your personal computer can be a fun experience; if your laptop computer has a DVD drive, this can be a particularly nice way to pass the time on a long plane flight. The cost of these drives for personal computers will typically be in the $150 to $300 range.



Storage Options

Magnetic tape: Slow sequential access. Used primarily for backup purposes.

Magnetic disk: Fast access times (under 10 milliseconds). Capacities up to 80 GB for Ultra ATA drives, and up to 50 GB with Ultra SCSI controllers.

Floppy disk: 3.5" disk can store 1.44 MB of data. Slow access and limited storage capacity.

Optical disk (CD-ROM): Used to distribute software and for multimedia applications. Can store 650 MB of data. Read only device.

CD-R: Recordable CDs. Drives and disks are significantly more expensive than CD-ROM drives.

CD-RW: Rewritable CDs. Users may overwrite files on these CDs. CD-RW disks are backward compatible with standard CD-ROM drives.

DVD: A major product shift. Substantial storage capacity of up to 17 GB per disk.

Iomega Zip drive: Alternative to the 3.5" 1.44 MB drive. Can store 100 MB of data; the most recent version can store 250 MB. External (portable) or internal variants.

Imation LS-120 drive: Alternative to the 3.5" 1.44 MB drive. Can store 120 MB of data. Backward compatible with 3.5" diskettes.

Iomega Jaz drive: Removable disks can store 2 GB of data. More expensive than Zip drives. Access times comparable to those of hard drives.


Output technology

The two broad categories of output technology are hard copy output and soft copy output. As the name suggests, hard copy output involves printing out the desired output on paper. There are a number of options available for obtaining hard copy output, which we will discuss below. Soft copy output involves displaying the output on the user's computer screen (also called the "video display terminal"). A number of characteristics determine the quality of the soft copy output. These will also be discussed later.

Hard copy output options

Printers can be broadly classified into two categories: impact printers and non-impact printers. Dot matrix printers are impact printers and generate output by forming characters from a matrix of pins which then strike an inked ribbon. Although dot matrix printers are slow and noisy, and are only slightly cheaper than ink jet and low end laser printers, they are still in use because of one significant advantage over ink jet and laser printers - dot matrix printers can generate multiple copies simultaneously. This feature is particularly useful for printing out invoices, receipts, orders, and similar documents when multiple copies are almost always required. The speed of printing of dot matrix printers is measured in terms of the number of characters per second (cps) that are printed.

Ink jet printers are one type of non-impact printer. An ink jet printer generates output by shooting a microscopic jet of ink from the print head onto the paper. The ink is of special quality and dries almost instantly. Although the quality of ink jet printing is very good, the printed images will appear somewhat smudged when regular copier/printer paper is used. Special high gloss paper, which is more expensive, results in better quality output. Ink jet printers available today provide inexpensive color printing. While some low cost color ink jet printers require the user to change the ink cartridge from black to color, other more expensive ones can automatically switch between printing in color and printing black only using a single ink cartridge. Like dot matrix printers, ink jet printers also print a character at a time. Print resolutions of ink jet printers are expressed in terms of dots per inch (dpi); expect resolutions of 600 to 1200 dpi even for inexpensive printers. The speed of a mid-range ink jet printer is roughly nine pages per minute in black and six pages per minute in color. 

A laser printer uses laser beams to create an image of the output on a photosensitive drum. The drum, which picks up toner ink, then rolls against a sheet of paper to transfer the image onto the paper. Laser printers thus print an entire page at a time. The print resolution of laser printers is also expressed in terms of dpi. Three hundred dpi is the minimum resolution of laser printers, while 600 dpi is common even in relatively low cost laser printers. High end laser printers, which cost in excess of $1,000, can generate output at 1,200 dpi. In terms of speed, laser printers print at a minimum of four pages per minute, while speeds of 8, 12, 17, and 22 pages per minute are not uncommon for business laser printers. A recent trend in laser printers is the falling cost of color laser printers: previously costing over $5,000, good quality color laser printers can now be purchased for as little as $1,500.

Soft copy output

The quality of soft copy output, i.e., screen or video display, is a function of the video card and the monitor.  Let us examine each of these issues.


Video card: In a microcomputer, the processing tasks related to video display are usually handled either by a dedicated video card that fits into a slot on the system board or by a special chip integrated onto the system board. The latest interface for the video card is the accelerated graphics port (AGP). Prior to the development of the AGP, the video card interface used the peripheral component interconnect (PCI) bus. AGP cards are up to four times faster than cards using the PCI bus -- they offer up to 533 MB/s in contrast to 133 MB/s on the PCI bus. The amount of memory on the video card is another characteristic that determines the speed and quality of video display. Two megabytes of RAM for video is considered a bare minimum, with four and even eight megabytes increasingly becoming the norm. Memory reserved for video display determines the number of colors that can be displayed on the screen at a particular screen resolution; the more memory, the greater the number of colors that can be displayed. 
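The link between video memory, resolution, and color depth is straightforward arithmetic: one frame requires columns times rows times bytes per pixel. The sketch below illustrates this; the function name is ours, and real cards reserve somewhat more memory for overhead.

```python
# Minimum video memory needed for one frame:
# columns x rows x (bits of color per pixel / 8).
def vram_bytes(width, height, bits_per_pixel):
    """Bytes of video memory needed for one frame at this color depth."""
    return width * height * bits_per_pixel // 8

# 24-bit color at 1024 x 768 needs more than 2 MB of video RAM,
# which is why 4 MB cards became the norm at higher resolutions.
needed = vram_bytes(1024, 768, 24)
print(needed / (1024 * 1024), "MB")  # 2.25 MB
```

Halving the color depth to 8 bits (256 colors) cuts the requirement to a third, which is exactly the trade-off a 2 MB card forces at high resolutions.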

Monitor: The resolution of a microcomputer's monitor is expressed in terms of the number of columns by the number of rows that are displayed. Standard VGA (video graphics array) resolution displays 640 columns by 480 rows. Super VGA resolution is 800 x 600, while extended VGA is 1024 x 768. Even higher resolutions of 1280 x 1024 are available on certain monitors. Note, however, that although higher resolutions translate to crisper images, the size of the characters being displayed shrinks proportionately. Thus, the higher resolutions all but require monitors substantially larger than the standard 14" or 15" monitors. The highest resolution recommended for a 15" monitor is Super VGA (800 x 600). Seventeen inch monitors are more expensive, but much easier on the eye if resolutions higher than Super VGA are to be used continually. Seventeen and even 19 inch monitors are becoming default options on personal computer systems sold today. For certain computer aided design and graphics applications, a 21" monitor is very useful. 

The monitor's refresh rate, the number of times per second that the screen is repainted or refreshed, is expressed in hertz (Hz), or cycles per second. The higher the refresh rate at a given resolution, the more likely the display will be flicker free. In terms of the size of each dot or pixel on the monitor, the smaller the dot pitch, the crisper the characters displayed on the monitor. Good quality monitors have dot pitches of .28 mm, .26 mm, or less. Newer monitors are also typically rated as "energy star compliant," which means that they consume less power and can automatically shut off after a certain period of inactivity. Energy star compliant monitors also typically emit less radiation -- a critical consideration for users likely to be in front of a computer monitor for a considerable portion of the work day. The latest advance in PC displays is the flat-panel TFT (thin-film transistor) display. These displays, typically found on notebook computers, offer a space saving alternative to conventional monitors while still offering exceptional display quality. However, flat-panel displays are still considerably more expensive than traditional monitors. 

Apart from hard and soft copy output, the sound card present on most microcomputers offers another output option. For example, a microcomputer with a sound card and a CD or DVD drive can play an audio CD. An electronic piano keyboard can interface with a computer using a MIDI (musical instrument digital interface) port on the sound card. Thus, with supporting software and a MIDI port, an electronic piano keyboard can be used as an input device to a microcomputer, and musical selections previously input can be played back as output.


Software

Having discussed a considerable number of hardware terms and concepts, let us now turn to a discussion of computer software. The most basic definition of software is that it comprises instructions that the hardware can execute. The two broad categories of software are systems software and applications software. Systems software consists of the operating system and other utility programs that allow application programs to interact with computer hardware. Applications software consists of programs written to process the user's data and convert it into information.

The relationship between applications software and systems software is easily understood in the context of an application designed to convert the user's data into meaningful information. Let us assume that a user has designed an application program to process payroll time tickets resulting in the printing of employee paychecks. The time tickets represent data that needs to be processed. The application program sends the data and the program instructions detailing how the data is to be processed to the operating system. The operating system in turn directs the hardware devices (i.e., the central processing unit) to perform the functions necessary to process the data and return the results to the user (i.e., display the results on the computer screen or output to the printer).

Systems software

The various types of systems software include the operating system, utility programs, and language translators. The operating system manages and directs the functioning of all CPU and peripheral components. Allocating resources such as memory and processor time for tasks is one of the primary functions of the operating system. Tasks such as writing data from primary memory to secondary storage devices such as disk and tape drives are also handled by the operating system. As needed by application programs, the operating system allocates memory and processor time for the specific tasks that need to be performed in execution of the user's application program.

Three capabilities of operating systems are noteworthy: (1) multitasking, (2) multiprogramming, and (3) multiprocessing. Most present day operating systems, such as Unix and OpenVMS for mainframe computers and Windows 95/98 and the Macintosh System 8 for personal computers, are capable of multitasking. Multitasking is the ability of the operating system to concurrently handle the processing needs of multiple tasks. Thus, the user can perform word processing functions at the same time that the spreadsheet program prints a large file. Both personal computers and mainframe computers can perform multitasking. Mainframe computers alone are capable of multiprogramming. In a multi-user mainframe computing environment, multiprogramming is the ability to rapidly switch back and forth between different users' jobs. Each user receives a response very quickly, giving the user the impression that the computer is dedicated to that user's job. The immense speed of the mainframe computer allows it to switch between jobs very quickly, but at any one instant the computer is processing only one job. Another related ability of both mainframes and high end personal computers is multiprocessing, which is the ability to simultaneously control multiple processors within the same computing system. Whereas typical computers have only one CPU, a multiprocessing computer actually has several CPUs that are linked together. Only very complex scientific and mathematical processing jobs require multiprocessing, although some advanced servers can also benefit from multiple CPUs. 
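The flavor of multitasking can be sketched with threads inside a single program. This is only an analogy -- a true operating system schedules entire programs, not just threads within one program -- but it shows two tasks, a hypothetical "print job" and "edit job," both making progress under one scheduler:

```python
# A minimal multitasking analogy using threads. The task names are
# illustrative; real OS multitasking schedules whole programs.
import threading

results = []

def print_job():
    # Stands in for the spreadsheet program printing a large file.
    results.append("printing finished")

def edit_job():
    # Stands in for the user performing word processing at the same time.
    results.append("editing finished")

t1 = threading.Thread(target=print_job)
t2 = threading.Thread(target=edit_job)
t1.start(); t2.start()   # both tasks are now eligible to run
t1.join(); t2.join()     # wait until both have completed
print(results)
```

Note that on a single CPU the two tasks are interleaved rather than truly simultaneous, which mirrors the point made above about multiprogramming: at any one instant, only one job is actually executing.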

The two most popular operating systems for personal computers today are Microsoft Windows and the Macintosh Operating System 8 (Mac OS 8). Since its release in August 1995, Microsoft's Windows 95 operating system has been adopted by more than 20 million users. The current version of Windows is Windows 98, which was released in July 1998. The new integrated Internet user interface in Windows 98 gives users not only the simplicity of surfing the Web, but also the ability to find and access information quickly on their local network or intranet. Windows 98 also enables users to take advantage of innovative new hardware designs such as the Universal Serial Bus. The next version of the consumer-oriented Windows operating system, which is being called Windows "Millennium Edition," will be released later in the year 2000. For corporate users, Microsoft released Windows 2000 in February 2000. Windows 2000 is a full 32 bit operating system and is much more stable than Windows 98. It is intended primarily for the networked environments commonplace in most businesses today. Windows 2000 sports a number of advances over the previous version (called Windows NT 4.0), especially in the security arena. It is offered in two flavors: Windows 2000 Server (the upgrade to NT 4.0 Server) and Windows 2000 Professional (the upgrade to NT 4.0 Workstation). You are encouraged to read more about these new operating systems on Microsoft's web site. 

For its part, Apple has released Mac OS 8.1, the first major upgrade to its operating system since 1984, and currently offers version 9.0. This OS is a critical part of Apple's drive to recapture market share. IBM's OS/2 Warp 4 operating system has earned critical acclaim in the computing industry but little acceptance in the marketplace. All of these personal computer operating systems have a "graphical user interface" or GUI. These operating systems allow most functions to be performed by pointing and clicking using devices such as a mouse or a trackball. Programs, files, and peripheral devices such as printers and disk drives are all represented by icons on the screen. 

Linux has recently been receiving significant interest in the marketplace, notably as a competitor to Windows. Linux (often pronounced "lynn-ucks") is a UNIX-like operating system that was designed to provide personal computer users a free or very low-cost operating system comparable to traditional and usually more expensive UNIX systems. Linux has a reputation as a very efficient and fast-performing system. Linux's kernel (the central part of the operating system) was developed by Linus Torvalds at the University of Helsinki in Finland. To complete the operating system, Torvalds and other team members made use of system components developed by members of the Free Software Foundation for the GNU project. 

Linux is a remarkably complete operating system, including a graphical user interface, X Window System, TCP/IP, the Emacs editor, and other components usually found in a comprehensive UNIX system. Although copyrights are held by various creators of Linux's components, Linux is distributed using the Free Software Foundation's copyleft stipulations that mean any copy is in turn freely available to others. Red Hat and VA Linux are two popular vendors offering distributions of the Linux operating system. Dell Computer Corporation offers Linux as a preloaded option on some of its computers.  Linux is sometimes suggested as a possible publicly-developed alternative to the desktop predominance of Microsoft Windows. Although Linux is popular among users already familiar with UNIX, it remains far behind Windows in numbers of users. 

Utility programs are the second category of systems software. Mini-programs for performing commonly used functions like formatting disks, compressing files, scanning for viruses, and optimizing the hard disk are some examples of utility programs. In essence, utility programs complement the operating system by providing functions that are not already built into the operating system. Third party vendors typically provide suites of utility programs that extend the functionality of the operating system. 

The third category of systems software is language translators. Assemblers, interpreters, and compilers are the three types of language translators. As the term implies, a language translator takes a program written by the user, called the source code, and converts it into machine language, called the object code. The source code program is written in an English-like programming language using a text editor or a word processor capable of creating an ASCII (text) file. The computer's hardware can only understand machine language commands (object code), which are in binary code consisting of 0s and 1s. 

Interpreters convert source code into object code one line at a time. Some versions of the BASIC (Beginner's All-purpose Symbolic Instruction Code) programming language used an interpreter for execution; the interpreter must be invoked each time the program is to be run. An assembler is used to convert an assembly language program, rarely written these days, to machine language. Assembly language is referred to as a "second generation" programming language (machine language is considered the "first generation" programming language). Compilers are used to convert the source code of "third generation" programs such as COBOL (Common Business Oriented Language), Pascal, C, and C++ into object code. Unlike interpreters, compilers process the entire source code file and create an object code or executable file if the program is successfully compiled. Interpreters, assemblers, and compilers check the source code program for syntax errors (logic errors can be detected only by running test data and comparing the actual results to expected results). An interpreter indicates the syntax error and simply does not execute the line of code. A compiler generates a listing file highlighting each line of code with syntax errors. A successful compilation generates an object file, which is then linked to other needed object libraries; the output of this process is an executable file. Debuggers are useful utility programs that allow programmers to step through a program one statement at a time while examining how variables change values during execution. Debuggers thus assist in the detection of logic errors. Once a program is successfully compiled and an executable file is created, the user can run the program simply by executing the resulting executable (.exe) file; the source code file is not required to run the program. In fact, in most applications it will be appropriate to distribute only the executable file to users without providing them with the source code.
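The source-to-object-code idea can be illustrated from within a single language: Python's built-in compile() turns source text into a code object, which exec() then runs. This is only an analogy to the compiler/linker pipeline described above, not the same machinery, but it shows the translation and execution steps as distinct stages:

```python
# Translation vs. execution, sketched with Python's own built-ins.
# The source text below is a made-up two-line "program."
source = "total = 2 + 3\nprint(total)"

code_obj = compile(source, "<string>", "exec")  # the "translation" step
namespace = {}
exec(code_obj, namespace)                       # the execution step
print(namespace["total"])
```

A syntax error in the source text would be raised at the compile() step, before any of the program runs, mirroring how a compiler's listing reports syntax errors prior to producing an executable.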

Programming languages

In the above discussion of language translators we have already discussed first, second, and third generation programming languages. To repeat, machine language programming using 0s and 1s is the first generation programming language. Assembly language using cryptic symbols comprises the second generation programming language, in which the assembly language program needed to be "assembled" or converted to machine language. Third generation languages use plain English syntax to create the source code program that must then be compiled to create an object program or executable file. COBOL, Pascal, Visual Basic, and C are examples of third generation languages.

Fourth generation languages, referred to as 4GLs, are even more high level than third generation languages and use a very English-like syntax. Third generation languages are procedural languages in that the user must specify exactly how data is to be accessed and processed in order to generate desired output. In contrast, 4GLs are non-procedural, meaning that the user simply specifies what is desired (i.e., procedural details regarding how the data should be processed need not be provided). FOCUS and SQL (structured query language) are two examples of 4GLs. SQL (pronounced "sequel") is a very popular 4GL and is fast becoming the standard language for interacting with relational database systems. 
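The non-procedural character of SQL can be demonstrated with Python's built-in sqlite3 module. The table and data below are made up for illustration; the point is that the query states what is wanted, not how to scan or index the data:

```python
# A small illustration of SQL's non-procedural style. The "customers"
# table and its rows are hypothetical sample data.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE customers (name TEXT, balance REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("Acme", 500.0), ("Beta", 1200.0), ("Gamma", 80.0)])

# The SELECT states WHAT is desired (customers over a balance
# threshold) and leaves HOW to retrieve them to the database engine.
rows = conn.execute(
    "SELECT name FROM customers WHERE balance > 100 ORDER BY name"
).fetchall()
print(rows)  # [('Acme',), ('Beta',)]
```

A third generation language would instead require an explicit loop over the records, with the programmer spelling out each comparison and accumulation step.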

Both third and fourth generation languages adopt the perspective that data are separate from processes. Data are stored in repositories and programs specify the processing steps that modify data. A radically different viewpoint is adopted by object-oriented programming languages (OOPL). Rather than focusing on data versus processes, an OOPL simply focuses on the objects of interest in a particular domain. For example, in a sales order processing application the objects of interest would be customers, inventory, and orders. For each object, an OOPL defines attributes that need to be stored and also the processing methods that would be used to modify those attributes. For example, a "customer" object might have the following attributes: name, address, phone number, balance, and credit limit. The methods associated with the customer object might be addnew (to add a new customer), addbalance (to increase the customer's balance to reflect a credit sale), deductbalance (to decrease the customer's balance to reflect a collection received from the customer), and showbalance (to show the customer's current outstanding balance). In an OOPL, the attributes and the methods are defined together in one package. This property of OOPLs is called encapsulation.
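The "customer" object just described might be sketched as follows. The method names addbalance, deductbalance, and showbalance follow the text (addnew corresponds to the constructor here), while the implementation details are our assumptions:

```python
# A sketch of the "customer" object from the text: attributes and
# methods encapsulated together in one package.
class Customer:
    def __init__(self, name, address, phone, credit_limit):
        # "addnew" from the text: create a new customer.
        self.name = name
        self.address = address
        self.phone = phone
        self.credit_limit = credit_limit
        self.balance = 0.0

    def addbalance(self, amount):
        """Increase the balance to reflect a credit sale."""
        self.balance += amount

    def deductbalance(self, amount):
        """Decrease the balance to reflect a collection received."""
        self.balance -= amount

    def showbalance(self):
        """Show the customer's current outstanding balance."""
        return self.balance

# Hypothetical usage: a credit sale followed by a partial collection.
c = Customer("Acme Co.", "1 Main St.", "555-0100", 10000.0)
c.addbalance(2500.0)
c.deductbalance(1000.0)
print(c.showbalance())  # 1500.0
```

Because the balance can be changed only through the object's own methods, the data and the processing logic travel together, which is precisely what encapsulation means.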

Objects can communicate with one another by means of messages passed between them. For example, when a new sales order is placed, a new instance of the "orders" object is created. After this new order instance has been created, messages would be passed to the "customer" object to update the customer's balance, and also the "inventory" object to decrease the on-hand quantity of the items ordered (and presumably to be shipped). In effect, the messages passed between objects trigger methods that have been defined and stored internally within each object. Another unique feature of OOPLs is polymorphism. The same message passed to different objects might result in different actions, depending on the exact specification of the method invoked within each object. For example, a "depreciate" message passed to several asset objects might result in different actions as a function of the depreciation method defined for each asset. 
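
Polymorphism can be sketched in Python using the depreciation example above. The two depreciation formulas are simplified illustrations, not complete accounting methods:

```python
class StraightLineAsset:
    def __init__(self, cost, life_years):
        self.cost, self.life_years = cost, life_years

    def depreciate(self):
        # straight-line: equal expense each year of the asset's life
        return self.cost / self.life_years

class DecliningBalanceAsset:
    def __init__(self, cost, rate):
        self.cost, self.rate = cost, rate

    def depreciate(self):
        # declining balance: a fixed rate applied to cost
        return self.cost * self.rate

# The same "depreciate" message produces a different action for each
# object, depending on the method defined within that object.
assets = [StraightLineAsset(10000.0, 5), DecliningBalanceAsset(10000.0, 0.4)]
print([a.depreciate() for a in assets])  # [2000.0, 4000.0]
```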

A third feature unique to OOPL is inheritance. New objects can be created based on existing objects. The new objects can simply inherit the attributes and methods already defined for an existing object. Attributes and methods unique to the new object would be defined within the new object. As an example, a new "international customer" object can be created by inheriting the attributes and methods of an existing "customer" object. Only attributes and methods unique to international customers, such as the country and currency, would have to be defined in the new "international customers" object. In this manner, OOPLs facilitate code reusability, thereby simplifying the process of developing new applications. In summary, OOPLs have three unique features: (1) encapsulation, (2) polymorphism, and (3) inheritance. Smalltalk and C++ are two popular object-oriented programming languages.
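
Inheritance can be sketched in Python with the hypothetical customer objects described above:

```python
class Customer:
    def __init__(self, name, balance=0.0):
        self.name, self.balance = name, balance

    def addbalance(self, amount):
        # increase the balance to reflect a credit sale
        self.balance += amount

class InternationalCustomer(Customer):
    # Inherits name, balance, and addbalance from Customer; only the
    # attributes unique to international customers are defined here.
    def __init__(self, name, country, currency, balance=0.0):
        super().__init__(name, balance)
        self.country, self.currency = country, currency

ic = InternationalCustomer("Maple Ltd.", "Canada", "CAD")
ic.addbalance(750.0)            # inherited method; no new code written
print(ic.currency, ic.balance)  # CAD 750.0
```

The subclass reuses all of the existing customer code, which is the code-reusability benefit noted above.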

Applications software

Writing a program using a programming language such as C, C++, COBOL, or Visual Basic is one way of converting raw data into useful information. However, the vast majority of users are more likely to use an applications software package, such as a spreadsheet or database program, to perform common data processing tasks. Applications software packages are designed with a host of features and are easily customizable to meet almost any user need. The two broad categories of applications software are (1) general purpose applications software such as word processing, spreadsheet, database, graphics, and communications software, and (2) special purpose applications software such as accounting software packages. Both categories have several offerings for both the PC and Macintosh platforms.

General purpose applications software

You are probably already very familiar with word processing and spreadsheet software, and possibly with database software as well. Microsoft Word, Corel WordPerfect, and Lotus Ami Pro are the leading word processing software packages. Microsoft Excel, Corel Quattro Pro, and Lotus 1-2-3 are the leading spreadsheet packages. Microsoft Access, Corel Paradox, and Lotus Approach are among the major database software packages. All of the software packages listed are for the Microsoft Windows operating system, the latest version of which is Windows 98. Software programs such as Microsoft PowerPoint and Lotus Freelance Graphics are used to create presentation graphics. Data graphics -- presenting data graphically -- is a function included in most spreadsheet software. 

Communications software and fax software such as Symantec's WinFax Pro are also quite popular. However, the functionality provided by such software packages is increasingly being integrated into the operating system, obviating the need to obtain separate software packages for that functionality. For example, Microsoft's Windows 95 operating system includes accessories for dialing in to remote computers using a modem and also for sending and receiving faxes using a fax modem. Other types of applications software include project management software such as Microsoft Project, personal information managers such as Lotus Organizer, and scheduling/meeting software such as Microsoft Schedule+.

Special purpose applications software

Although there are a host of special purpose applications software packages, such as packages for keeping track of real estate listings, we will focus exclusively on accounting software packages. Accounting software packages can be broadly categorized into three groups. The first category comprises low end packages for use by small businesses; these packages are little more than sophisticated electronic checkbooks. Packages like Intuit Quicken, QuickBooks Pro, Peachtree, and Microsoft Money fall into this category. Many home users find packages like Quicken very useful for tracking their checking accounts and managing their finances. Some of these packages can be used by small businesses and include some very basic accounting functions. Most of these software packages can be purchased for under $200. Installing and configuring these low end packages is also relatively easy. 

The second category comprises mid-range packages such as Macola, Great Plains Dynamics, and SBT. These packages can cost anywhere from $5,000 to $15,000 and usually require the expertise of a consultant or a "value added reseller" (VAR) to install and configure the package. Most medium sized businesses will likely find that one of these packages will meet their accounting information processing needs. It should be noted that these packages are considered "modular" in that separate modules, such as inventory, payroll, and general ledger, can often be purchased separately. Subsequently, when the company grows and intends to automate additional accounting processes, the remaining modules from the same package can be purchased and integrated along with the existing modules. The first two categories of software almost always use proprietary file management systems to manage the necessary files within the software packages. The data files are accessible only through the file manager interfaces provided by the accounting software package. 

The third category comprises high end packages such as SAP, Oracle Applications, PeopleSoft, and Baan. These software packages are referred to as enterprise resource planning (ERP) systems since they typically span the entire enterprise and address all of the enterprise's resources.  Depending on the configuration, these packages can cost a company several hundred thousand dollars. Taking into account the cost of analyzing and redesigning existing business processes, the cost of implementing an ERP system can run into millions of dollars! Just how much more sophisticated are ERP systems relative to the packages in the first two categories? Take SAP, for example.  This complex software is ideally suited for multinational companies that have operations in different countries with different currencies and accounting conventions. Employees throughout the world can obtain access to data regardless of where the data is located. SAP also automatically handles foreign currency translations as well as the reconciliations that are necessary between countries that have different accounting conventions. A key feature of ERP systems is cross-functional integration. For example, for a manufacturing enterprise, an ERP system like SAP can be configured to automatically react to the creation of a new customer order by (1) updating the production schedule, (2) updating the shipping schedule, (3) ordering any needed parts, and (4) updating online sales analysis reports to reflect the new order.  Without an ERP system, the four procedures indicated would have to be performed by employees in at least four different departments (sales, production, inventory, purchasing), perhaps using four different information systems.  It is precisely this fragmentation of information systems across the company that ERP systems are designed to correct.  Thus, the key advantage of an ERP system is the integration of related business processes. 
This cross-functional integration is enabled chiefly using relational database technology. You can therefore imagine that ERP systems such as SAP must indeed be very complex. 
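
The fan-out described above, where one business event triggers updates in several functional areas, can be sketched with a simple publish/subscribe pattern in Python. The module names are invented for illustration and do not correspond to any real ERP product:

```python
# Registry of "modules" interested in new-order events.
subscribers = []

def on_new_order(handler):
    # Decorator: register a function to run whenever an order is created.
    subscribers.append(handler)
    return handler

@on_new_order
def update_production_schedule(order):
    return f"production scheduled for {order['item']}"

@on_new_order
def update_shipping_schedule(order):
    return f"shipping scheduled for {order['customer']}"

@on_new_order
def order_needed_parts(order):
    return f"parts ordered for {order['item']}"

@on_new_order
def update_sales_reports(order):
    return f"sales report updated by {order['amount']}"

def create_order(order):
    # One business event fans out to every interested module.
    return [handler(order) for handler in subscribers]

actions = create_order({"customer": "Acme", "item": "widget", "amount": 500.0})
print(len(actions))  # 4
```

A single `create_order` call reaches all four "departments" at once, which is the integration that would otherwise require four separate information systems.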

Unlike accounting packages in the first two categories, the high end packages such as SAP almost always use relational databases to store the raw data. Thus, the data is accessible not only via the accounting package, but also through the relational database management system. This ability to access the data via the database management system allows for much greater flexibility in accessing and analyzing the data. The distinction here is between file-oriented data structures, such as the file managers alluded to above, and database-oriented data structures, such as relational databases. The following table summarizes the various software categories.



Software Categories

Systems software:
* Operating system
* Utility programs
* Language translators (interpreters and compilers)
* Programming languages (first, second, third, fourth, and object-oriented)

Applications software:
* Word processing
* Data base management

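
The flexibility that relational storage gives the high end packages can be sketched with Python's built-in sqlite3 module. The schema below is a hypothetical illustration, far simpler than any real ERP schema:

```python
import sqlite3

# Hypothetical accounting tables for illustration only.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Acme'), (2, 'Baker');
INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0);
""")

# Because the data live in a relational DBMS, an analyst can join and
# aggregate them directly, without going through the accounting
# package's own screens or reports.
result = db.execute("""
    SELECT c.name, SUM(o.amount)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(result)  # [('Acme', 350.0), ('Baker', 75.0)]
```

A proprietary file manager, by contrast, would expose the data only through the interfaces the accounting package chose to provide.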
In this section of the chapter we discuss the basic classes of computers and bring together the above discussion of hardware and software. There are at least seven different classes of computers in use today: super computers, mainframe computers, mini computers, servers, workstations, desktop computers, and notebook computers. Although these classes are somewhat arbitrary, the major way in which these computers are typically categorized is their performance (i.e., how fast they serve the needs of users). One of the more common comparative variables (a speed rating) is MFLOPS (millions of floating point operations per second). The following discussion is not intended to be comprehensive but rather is presented for comparative purposes.


Type of Computer: Super Computer
Example (price): Cray SV1 ($1M - $30M)
Speed (MFLOPS): 1,000 - 2,000,000
Distinguishing features:
* Multiple parallel processors
* Terabytes of data storage
* Gigabytes of main memory
Typical uses:
* Numerically intensive scientific calculations
* Interrogation of extremely large data sets

Type of Computer: Mainframe Computer
Example (price): IBM S390 ($500,000 - $10M)
Speed (MFLOPS): 500 - 20,000
Operating systems: OS/390, VM/ESA
Distinguishing features:
* Multiple processors
* Terabytes of data storage
* Gigabytes of main memory
* Ability to handle thousands of concurrent users
Typical uses:
* Large business general data processing
* Server in client/server applications

Type of Computer: Mini Computer
Example (price): IBM AS/400 ($50,000 - $500,000)
Speed (MFLOPS): 250 - 1,000
Distinguishing features:
* Multiple processors
* Up to a terabyte of data storage
* Up to two gigabytes of main memory
* Ability to handle hundreds of concurrent users
Typical uses:
* Server in client/server applications
* Midsize business general processing
* Scientific computing in universities

Type of Computer: Server
Example (price): Compaq, Gateway, Dell, Hewlett Packard, Sun Microsystems ($4,000 - $50,000)
Speed (MFLOPS): 200 - 500
Operating systems: Windows NT Server
Distinguishing features:
* Multiple processors
* 10 - 40 GB Ultra SCSI drive
* 256 MB to 2 GB of ECC RAM
* 100 Mbps networking card
* Redundant power supply
* RAID controller
Typical uses:
* Server in client/server applications
* Server for local area network
* Web server

Type of Computer: Workstation
Example (price): Gateway, Dell, Sun Microsystems (Sun Ultra 450) ($2,500 - $10,000)
Speed (MFLOPS): 200 - 500
Operating systems: Windows NT Workstation
Distinguishing features:
* Single or dual processor
* 10 - 18 GB Ultra ATA drive
* 128 MB or more RAM
* High-end video card
Typical uses:
* Computationally intensive applications (CAD/CAM, graphics design)

Type of Computer: Desktop Computer
Example (price): Gateway, Dell, Micron, IBM ($1,000 - $4,000)
Speed (MFLOPS): 50 - 400
Operating systems: Windows 95/98, Windows NT Workstation
Distinguishing features:
* Single processor
* 6 - 14 GB Ultra ATA drive
* 64 MB RAM
* Network adapter card
* Multimedia capabilities
Typical uses:
* Personal computing
* Client in client/server applications

Type of Computer: Notebook Computer
Example (price): Toshiba Tecra, IBM Thinkpad, Gateway Solo, Dell Latitude ($1,500 - $5,000)
Speed (MFLOPS): 50 - 300
Operating systems: Windows 95/98, Windows NT Workstation
Distinguishing features:
* Active matrix (thin film transistor - TFT) display
* 2 - 4 GB ATA drive
* 32 MB RAM
* Long life battery
* Credit card size "PC card" modem, network adapter
Typical uses:
* Mobile computing
* Client in client/server applications

An additional category of personal computers includes palmtop computers and personal digital assistants (PDAs). These devices are smaller than notebook (laptop) computers.  Palmtops, also referred to as "handheld PCs," are small hand-held devices with fully functional yet small keyboards. Some of these palmtops come with color screens. Many of the palmtop manufacturers, like Hewlett Packard, have adopted the Microsoft Windows CE operating system. 

Personal digital assistants (PDAs) are widely used and are likely at the forefront of the next wave of personal computing devices.  Compaq, for example, suggests that by the year 2005, 80% of their product mix will be focused on PDA-like devices.  PDAs are perhaps best delineated by the operating system (OS) they use. The two predominant operating systems for these devices are the 3Com Palm operating system (Palm OS) and Microsoft Windows CE.  The Palm OS is used on devices like the 3Com Palm VII and the Handspring Visor.  Windows CE is used on a number of "Pocket PC" devices like the HP Jornada; "Pocket PC" is a Microsoft label applied to PDA devices using Windows CE.  Microsoft has just recently upgraded this operating system, but PDA devices running Windows CE have not been well received in the marketplace.  At this point in time, Palm devices are clearly preferred over the CE devices.  These devices allow the user to synchronize with information resources running on the PC, such as email and personal organizers, and, assuming connectivity, Web-based information resources (see Avantgo.com).

The performance of a specific computer and the category in which it appears will clearly vary over time. What we characterize today as a mainframe computer will very likely be classified as a personal computer a few years from now if we consider just the performance aspect of the machine. 

It is appropriate to note that many of the machines described above form the basic building blocks of networked environments. These networked environments are discussed in more detail in the next chapter. What is appropriate at this point is to recognize that a machine designated as a "server" is a computer that provides the major processing capabilities in a networked environment. This machine can typically be configured with multiple processors, a considerable amount of RAM, and fairly sizable SCSI hard drives. In networked environments, servers exist to provide connectivity to the clients and to other machines outside the network, printing facilities, large amounts of disk space, and application programs. If the server machine is in the microcomputer or workstation class, these machines will typically use the Microsoft Windows NT or UNIX operating systems. UNIX is an operating system originally designed by Bell Laboratories and is widely used across a range of machine types. UNIX is a difficult operating system to learn, use and manage. "Client" machines are those computers that are connected in some way to a "server." Right now, as you sit and access the Cybertext Publishing web site, the machine you are using is a "client" machine and the dual processor Pentium machine at Cybertext is the "server." 
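
The client/server interaction described above can be sketched on a single machine with Python's socket module. This minimal example stands in for the separate server and client machines in a real network:

```python
import socket
import threading

# The "server" listens for a connection and answers one request.
def serve_once(sock):
    conn, _ = sock.accept()
    request = conn.recv(1024)
    conn.sendall(b"server received: " + request)
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))   # bind to any free local port
server.listen(1)
threading.Thread(target=serve_once, args=(server,)).start()

# The "client" connects to the server, sends a request, and reads
# the reply -- just as your browser does with a web server.
client = socket.socket()
client.connect(("127.0.0.1", server.getsockname()[1]))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())  # server received: hello
```

In practice the two endpoints would run on different machines, with the server providing the shared resources (disk space, printing, applications) described above.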

The blending of hardware, software, and people to accomplish a specific set of tasks can be very simple or highly complex. For example, bringing in an enterprise-wide solution like SAP to facilitate management decision-making can be a very expensive undertaking, whereas basic data analysis using a spreadsheet program on a personal computer is a straightforward and relatively inexpensive proposition. Underlying both of these examples is a basic premise: the user's needs determine the software to be employed, and the software in turn determines the hardware. If you are called upon to recommend computer hardware, you should first examine the nature of the applications to be executed using the hardware.  By reference to the above table, you can then determine the specific type of hardware configuration that would meet the user's needs (e.g., server, workstation, or personal computer).


This chapter focused on information technology. Hardware concepts were described in terms of input, processing, storage, and output technologies. Regarding input, various technologies such as key input, mouse input, and automatic input of data using bar code scanners and similar devices were described. Newer technologies such as voice input were also discussed. The central processing unit was then discussed in terms of its components such as the arithmetic-logic unit, the memory unit, and the control unit. Recent processor technologies such as reduced instruction set computing (RISC) were also discussed. Regarding storage technology, both temporary memory comprising random access memory, cache memory, and read-only memory, and permanent storage such as magnetic disks, CD-ROM, and tape were described. Newer storage technologies like Iomega's Zip drive and recordable CDs were also described. Hard copy options such as impact and non-impact printers, and soft copy output options such as screen display were discussed. Software was discussed in terms of two broad categories: systems software and applications software. Systems software includes the operating system, utilities for performing functions like formatting disks, and language translators for converting source code programs into object code (machine language). A number of programming languages were also discussed, including the generations of languages from the first to the fourth generation. Special purpose application software, specifically accounting software packages, was discussed in some detail.  Alternative systems configurations were then presented, ranging from super computers to personal computers.  The chapter concluded by briefly discussing how users' needs should drive the choice of software and hardware.




Key Terms

Address bus
Applications software
Arithmetic-logic unit
Audio input
Bar code scanners
Cache memory
Central processing unit
Clock unit
Control bus
Control unit
Data bus
Dot Matrix
First generation programming language
Fourth generation languages
Impact printers
Ink jet
Internal bus
Keying device
Language translators
Laser printer
Light pens
Magnetic disks
Magnetic ink character readers
Magnetic tape
Memory unit
Non-impact printers
Object-oriented programming languages
Operating system
Optical character readers
Palm Tops
Personal Digital Assistants
Point of sale device
Random access memory (RAM)
Read-only memory (ROM)
SDRAM memory
Second generation programming language
Systems software
Third generation languages
Touch screen devices
Ultra ATA drive
Ultra SCSI drive
Utility programs
Video input




Key Web Sites


  • AMD -- A manufacturer of microprocessors. Providing stiff competition to Intel.
  • Apple -- A producer of computer hardware and software. The most notable product is the Apple Macintosh.
  • Dell Computer Corporation -- A manufacturer of personal computers. This company is one of the predominant producers of personal computers. Started in a dorm room at the University of Texas at Austin.
  • Compaq -- One of the major players in the PC and server markets (acquired Digital Equipment Corp. in 1998)
  • Gateway - A manufacturer of personal computers. This company is another leading producer of personal computers.
  • Hewlett Packard - A manufacturer of personal computers, printers, scanners and electronic instrumentation devices.
  • Intel - Designer and manufacturer of the Intel Pentium, Pentium Pro, and Pentium II microprocessors. 
  • International Business Machines - "Big Blue." Producer of the IBM ThinkPad laptop computer and the OS/2 WARP operating system for the personal computer. 
  • Motorola - Manufacturer of a range of electronic and semiconductor devices and microprocessors. PowerPC line of processors used in Apple's Power Macintosh line of personal computers. 


  • Corel -- Software manufacturer. Vendor of Corel Draw and WordPerfect. 
  • Microsoft - Software and hardware manufacturer. The Bill Gates Company. Producers of Windows 95, Windows NT, and the Microsoft Office suite of software programs. 
  • Oracle -- developer of a suite of programs called "Oracle Applications" (one of the applications is Oracle Financials -- a high end accounting package that interfaces with a relational database) 
  • SAP -- Developer of a popular high-end "enterprise resource planning" software--SAP R/3. 
  • Seagate -- Manufacturer of hard drives. 
  • TUCOWS -- The Ultimate Collection of Windows Software 
  • Western Digital -- Manufacturer of hard drives. 
  • Windows 95 Shareware -- A terrific site for lots of neat freeware and shareware for the personal computer. 


  • whatis.com -- An excellent site providing answers to virtually any "what is" question relating to technology 
  • Computer Dictionary - A first rate computer dictionary. 
  • ISWORLD Net -- a comprehensive site about information systems 
  • C|Net -- a "one stop shopping" site for anything to do with computers 
  • PC technology guide -- a web site offering explanations of various PC technologies 
  • PC Webopaedia -- an online encyclopedia of computer technology




Discussion Questions

  1. Distinguish between on-line and off-line data entry devices.
  2. Briefly describe the following input devices: mouse, trackball, and trackpad.
  3. Describe technologies that automate the data input process.
  4. What are the components of the central processing unit?
  5. Distinguish between random access memory, read only memory, and cache memory.
  6. Explain the function of each of the following components of the internal bus: data bus, address bus, and control bus.
  7. Distinguish between CISC and RISC processors.
  8. Distinguish between magnetic tape and magnetic disks.
  9. Identify the two primary interfaces for magnetic disks.
  10. What are some alternatives to the 1.44 megabyte 3.5" floppy disk drive for portable data storage for personal computers?
  11. What are some of the uses of CD-ROM drives for personal computers?
  12. Distinguish between impact and non-impact printers.
  13. What are the determinants of "good" video displays for computer systems?
  14. Distinguish between systems software and applications software.
  15. Describe the capabilities of present day operating systems for (a) mainframe computers and (b) personal computers.
  16. Giving examples, explain the concept of utility programs.
  17. What is the function of a language translator? Distinguish between compilers and interpreters.
  18. Distinguish between first, second, and third generation languages.
  19. What is an object-oriented programming language? Explain giving examples.
  20. What are the major categories of applications software? Provide examples of software in each category
  21. Distinguish between mainframe computers, mini-computers, servers, workstations, desktop computers, and notebook computers.




Problems and Exercises

1. In your first week at your new job your boss asks you to give her a "wish list" for a microcomputer. What specifications would you list for the processor, memory, video display, and the hard drive? You may make any assumptions regarding the types of applications you might be using.

 2. The basic elements of an information system include the following: (a) Input, (b) Processing, (c) Storage, and (d) Output. Classify each of the following items into one of the four preceding categories. For some items more than one answer is possible. 

Optical disk: ________________ 

Point-of-sale recorder (POS): ________________ 

Floating-point unit: ___________________ 

Light pen: ____________________ 

3. Explain all the technical terms used in the following recent advertisement for a Dell Pentium PC.

Dimension 4100

Pentium III 733 MHz

Intel Pentium III processor at 733MHz

19" (18.0" vis) M990 monitor

64MB SDRAM at 133MHz

10GB Ultra ATA Hard Drive (7200 RPM)

16MB ATI Rage Pro Graphics

48X Max Variable CD-ROM Drive

SoundBlaster 64V PCI LC Sound Card with MusicMatch Jukebox

Altec Lansing ACS-340 Speakers with Subwoofer

V.90/56K PCI Telephony Modem for Windows

Microsoft Windows Millennium Edition

Microsoft Works Suite 2000 with Money 2000 Standard

Norton AntiVirus 2000

Microsoft Internet Keyboard, Dell Edition

Logitech MouseMan Wheel (PS/2v)

3.5" Floppy Drive

1 Year Next Business Day On-Site Parts and Labor, Years 2 and 3 Parts, BSC

4. Browse through Intel's Web site and identify an emerging topic of relevance to the discussion of processor or communications technology in this chapter. Send your instructor electronic mail highlighting your finding. 

5. Search the World Wide Web for information on the following topics related to information technology. Be prepared to make a brief presentation to the class about the results of your search. (Hint: you will find these terms explained in online glossaries, such as whatis.com or the PC webopaedia).  

         DLL file

         Fibre Channel

 6. Visit the Web sites of Dell Computer Corp. and Gateway Computer Corp.  Specifically explore their "on-line systems configurators."  Assume that your boss has asked you to provide a "custom configuration" specifically for your company's needs.  Make assumptions about system requirements for your hypothetical company and experiment with different configurations to determine the effect of adding and/or subtracting features from the standard configuration.  Be sure that you understand the implications of each configuration change. 

7. Your father runs a small business. He needs a computer for his office at home. He also likes to “test drive” the Microsoft Flight Simulator. Assume that your father approaches you and asks about the possibility of leasing a computer.  He is sending you to the university and is a little bit short of cash at the present time. Visit the Web site of Dell Computer Corp and thoroughly investigate the lease versus buy options. While obviously a personal decision, what would you recommend to your father? Should he lease or buy a computer from Dell? What factors did you consider in your recommendation. As you contemplate these issues, consider the following. What interest rate does Dell appear to be charging on its leases? What options are available at the end of the lease? Is the Dell lease really a lease? Be sure to take the Lease Quiz on the Dell site.


  Last Updated: July 20, 2001




Copyright 1996-2001 CyberText Publishing, Inc. All Rights Reserved