DESIGN AND IMPLEMENTATION OF AN AUTOMATED COMPUTER HARDWARE DIAGNOSIS SYSTEM



ABSTRACT

 

Computing hardware has become a platform for uses other than mere computation, such as process automation, electronic communications, equipment control, entertainment and education. Each field has in turn imposed its own requirements on the hardware, which has evolved in response to those requirements; the touch screen, for example, arose to create a more intuitive and natural user interface. This background guides the diagnosis and troubleshooting of computer hardware.

Efficient methodical troubleshooting starts with a clear understanding of the expected behaviour of the system and the symptoms being observed. From there the troubleshooter forms hypotheses on potential causes, and devises (or perhaps references from a standardized checklist) tests to eliminate these prospective causes.

 

Computer users are in the habit of taking their faulty systems to a technician for repair and maintenance because they cannot troubleshoot (diagnose) the systems themselves. This study develops a Windows-based software application that can successfully guide and help computer users to diagnose, troubleshoot and fix their faulty computer systems.

The front end of the application was developed using the following tools: the .NET Framework, Microsoft Visual Studio .NET 2003, and servers including Microsoft Windows Server 2003, Microsoft SQL Server and Microsoft BizTalk.

The back end was based on Microsoft Access 2003.

The system has been developed with care so that it is free of errors while remaining efficient and less time-consuming to use.

 

 

TABLE OF CONTENTS

CHAPTER ONE

GENERAL INTRODUCTION

1.1       Background to the Study

1.1.1    The History of Computer Technology

1.1.2    Developmental Trends of Computer Hardware

1.1.3    System Diagnosis

1.2       Motivation for the Study

1.3       Aims and Objectives of the Study

1.4       Scope of the Study

1.5       Significance of the Study

1.6       Definition of Terms

 

CHAPTER TWO

REVIEW OF RELATED LITERATURE

2.1       Introduction

2.2       Installation of Hardware

2.3       Hardware Troubleshooting

2.4       Hardware Maintenance

 

CHAPTER THREE

ANALYSIS OF THE EXISTING SYSTEM

3.1       Introduction

3.2       Catalogues of Existing Computer Diagnosis Application

3.2.1    Registry Repair Software

3.2.2    Cworks PC Maintenance Software

3.2.3    EZ Maintenance Software

3.2.4    PC-Diag Professional PC Diagnostic Software Suite

3.2.5    PC-Doctor Network Factory

3.3       Operational Procedures of the Existing Computer Diagnostic Application

3.3.1    Registry Repair Software

3.3.2    Cworks PC Maintenance Software

3.3.3    PC-Diag Professional

3.3.4    PC-Doctor Network Factory

 

CHAPTER FOUR

SYSTEM METHODOLOGY (ANALYSIS AND DESIGN)

4.1       Introduction

4.2       Objective of the Design

4.3       System Design Specification

4.3.1    Menu design (Main menu)

4.3.2    Screen Design/Interface                        

4.3.3    Input Design

4.3.4    Output/Processing Design

4.3.5    Control Design

4.3.6    File Design and Database Specification

4.4       Analysis of the Proposed System

4.5       System Implementation and Documentation

4.6       Hardware and Software Requirements

4.6.1    Hardware Requirement

4.6.2    Software Requirement

4.7       Conversion Plan

4.7.1    Change-Over Procedure

4.8       Choice of Programming Language Used

4.9       System Documentation

 

CHAPTER FIVE

SUMMARY, CONCLUSION AND RECOMMENDATION

5.1       Summary

5.2       Conclusion

5.3       Recommendation

5.4       Suggestion for Further Study

REFERENCES

APPENDIX: Source Code

 

 

 

CHAPTER ONE

GENERAL INTRODUCTION

1.1       Background to the Study

            Computers were invented to “compute”: to solve “complex mathematical problems”. They still do that, but that is not why we are living in an information age. That reflects the other things that computers do: store and retrieve data, manage networks of communications, process text, generate and manipulate images and sound, fly aircraft and spacecraft, and so on. Deep inside a computer are circuits that do those things by transforming them into a mathematical language.

            One definition of a modern computer is that it is a system: an arrangement of hardware and software in hierarchical layers. Those who work with the system at one level do not see or care about what is happening at other levels. The highest levels are made up of “software” – by definition things that have no tangible form but are best described as methods of organisation.

1.1.1    The History of Computer Technology

            A complete history of computing would include a multitude of diverse devices such as the ancient Chinese abacus, the Jacquard Loom (1805) and Charles Babbage’s “analytical engine” (1834). It would also include discussion of mechanical, analog and digital computing architectures. As late as the 1960s, mechanical devices, such as the Marchant calculator, still found widespread application in science and engineering. During the early days of electronic computing devices, there was much discussion about the relative merits of analog versus digital computers. In fact, as late as the 1960s, analog computers were routinely used to solve systems of finite difference equations arising in oil reservoir modelling. In the end digital computing devices proved to have the power, economics and scalability necessary to deal with large scale computations. Digital computers now dominate the computing world in all areas ranging from the hand calculator to the supercomputer and are pervasive throughout society. Therefore, this brief sketch of the development of scientific computing is limited to the area of digital, electronic computers.

            The evolution of digital computing is often divided into generations. Each generation is characterized by dramatic improvements over the previous one in the technology used to build the computers, the internal organisation of the computer systems, and the programming languages. Although not usually associated with computer generations, there has also been a steady improvement in algorithms, including algorithms used in computational science.

1.1.2    Developmental Trends of Computer Hardware

            The history of computing hardware is the record of the constant drive to make computer hardware faster, cheaper, and able to store more data. Before the development of the general-purpose computer, most calculations were done by humans. Tools to help humans calculate were then called “calculating machines”, known by proprietary names, or, as they are now called, calculators. It was the humans who used the machines who were then called computers; there are pictures of enormous rooms filled with desks at which such computers used their machines to jointly perform calculations, for instance the aerodynamic ones required in aircraft design.

            Computing hardware has become a platform for uses other than mere computation, such as process automation, electronic communications, equipment control, entertainment and education. Each field has in turn imposed its own requirements on the hardware, which has evolved in response to those requirements; the touch screen, for example, arose to create a more intuitive and natural user interface.

 

 

Mainframe Computer

            A mainframe is simply a very large computer; “mainframe” is an industry term for such a machine. The name comes from the way the machine is built up: all units (processing, communication, etc.) were hung in a frame, so the main computer is built into a frame, hence “mainframe”. Because of the sheer development costs, mainframes are typically manufactured by large companies such as IBM, Amdahl and Hitachi. Their main purpose is to run the commercial applications of Fortune 1000 businesses and other large-scale computing tasks. Think here of banking and insurance businesses, where enormous amounts of data, typically (at least) millions of records, are processed each day. A mainframe has 1 to 16 CPUs (modern machines more), memory ranging from 128 MB to over 8 GB of online RAM, and processing power ranging from 80 to over 550 MIPS. It often has different cabinets for storage, I/O and RAM, and it has separate processes (programs) for task management, program management, job management, serialization, catalogs, and inter-address-space communication.

Historically, a mainframe is associated with centralized computing, the opposite of distributed computing, meaning that all computing takes place (physically) on the mainframe itself: the processor section. The building of mainframes started with the Mark I, soon to be followed by tens of other types and manufacturers. Because of the development costs, only governments and large firms could pay for the development of such behemoths.

Some early mainframes include the ENIAC (1942), Mark I (1944), BINAC (1949), Whirlwind (1960), UNIVAC (1952), IBM 701 (1953) and IBM 360 (1963). The ENIAC had thirty separate units, plus power supply and forced-air cooling, and weighed over thirty tons. Its 19,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors and inductors consumed almost 200 kilowatts of electrical power. Unlike the ENIAC, the UNIVAC processed each digit serially, but its much higher design speed permitted it to add two ten-digit numbers at a rate of almost 100,000 additions per second. It was the first mass-produced computer. The central complex of the UNIVAC was about the size of a one-car garage, 14 feet by 8 feet by 8.5 feet high: it was a walk-in computer. The vacuum tubes generated an enormous amount of heat, so a high-capacity chilled-water and blower air-conditioning system was required to cool the unit. The complete system had 5,200 vacuum tubes, weighed 29,000 pounds, and consumed 125 kilowatts of electrical power.

The IBM 704 was the first large-scale commercially available computer system to employ fully automatic floating-point arithmetic commands. It was a large-scale electronic digital computer used for solving complex scientific, engineering and business problems, and was the first IBM machine to use FORTRAN. The 704 and the 705 were the first commercial machines with core memories. The IBM 705 was developed primarily to handle business data; it could multiply numbers as large as one billion at a rate of over 400 per second. In a 1954 IBM publication, the 705 was credited with "forty thousand or twenty thousand characters of high-speed magnetic core storage; any one of the characters in magnetic core storage can be located or transferred in 17 millionths of a second; any one of these characters is individually addressable."

In the early days, output came via paper tape; later via an array of glowing lamps; and, when vacuum-tube technology became sophisticated enough to build a CRT, by means of spots on a screen. A mainframe does have some particular properties that make it stand out:

·         It manages a large number of users.

·         Distributes the sheer workload that can be handled by the machine over different processors and in/output devices.

·         All processes are running on the host and not on the terminal.

·         Output is sent to the terminal through a program running (in the background) on the host (mainframe), and nothing else goes over the line. It is as if you are connected to a large computer by long wires. That is also the reason why your typing sometimes appears on your monitor more slowly than you actually type.

Many scientists have contributed to the mainframe computer as it is now. Things did not go as smoothly and quickly as they do nowadays; sometimes many items, mechanisms or materials still had to be invented before things really got under way. Online memory was a crucial phase in developing large computers, and when timesharing was invented in the late 1960s, mainframe use exploded. A (modern) mainframe is still a very large machine, sometimes tens of square metres. It usually has more than one processor and loads of memory, often between a few megabytes and several hundred gigabytes of RAM. It has tons of disk space and other storage facilities in sizes and quantities not normally found with mini- or microcomputers. And although it looks as if hundreds of users are using the machine simultaneously, it is all governed by a sophisticated time-sharing system; hence serialization (per processor).

Minicomputers

            The minicomputer era lasted from 1960 through 1980. Its purpose was to offer a cost-efficient alternative to room-sized mainframe computers. It was a third-generation computer technology that served as an interim size and solution between mainframes and microcomputers. Before the invention of the minicomputer, two major inventions paved its way: the transistor, developed by Bell Labs in 1947, and the integrated circuit of the late 1950s, which improved the efficiency and reduced the size of electronic components. The integrated circuit was first introduced in 1958 by Jack Kilby of Texas Instruments. It was further developed on silicon six months later by Robert Noyce, co-founder of Fairchild Semiconductor.

            The minicomputer first hit the market in 1960 with Digital Equipment’s PDP-1 (Programmed Data Processor). It was the first commercial computer that came with a monitor and keyboard. Historically, a minicomputer is associated with decentralized computing, meaning most computing takes place (physically) on the mini itself.

            Minicomputers were introduced in the early 1960s and announced a new era in computing. They were relatively low-cost and small. This allowed more people to have access to computers, and as a result a spurt of new applications was created in universities, industry and commerce.

Microcomputer

            A microcomputer is a computer with a microprocessor as its central processing unit. Microcomputers are physically small compared to mainframes and minicomputers. Many microcomputers (when equipped with a keyboard and screen for input and output respectively) are also called personal computers (in the general sense). The term “microcomputer” came into popular use after the microprocessor replaced the many separate components that made up the minicomputer CPU with one integrated chip. The term was also first seriously employed to designate the Micral N as the first solid-state machine designed around a microprocessor. As microprocessors and semiconductor memory became less expensive, microcomputers in turn grew cheaper and easier to use.

In common usage, "microcomputer" has been largely supplanted by the description "personal computer" or "PC", which indicates that the machine has been designed to be used by one person at a time. Since the advent of microcontrollers (monolithic integrated circuits containing RAM, ROM and CPU all on-board), the term "micro" is now more commonly used to refer to those. A microcomputer comes equipped with at least one type of data storage, usually RAM. Although some microcomputers (particularly early 8-bit home micros) performed tasks using RAM alone, some form of secondary storage is normally desirable. In the early days of home micros, this was often a data cassette deck (in many cases as an external unit); later, secondary storage (particularly floppy-disk and hard-disk drives) was built into the microcomputer case.

The period from about 1971 to 1976 is sometimes called the first generation of microcomputers. These machines were for engineering development and hobbyist personal use. In 1975, the Processor Technology SOL-20 was designed, which consisted of a single board that included all the parts of the computer system. The SOL-20 had built-in EPROM software, which eliminated the need for rows of switches and lights. By 1977, the introduction of the second generation, known as home computers, made microcomputers considerably easier to use than their predecessors, whose operation often demanded thorough familiarity with practical electronics. The ability to connect to a monitor (screen) or TV set allowed visual manipulation of text and numbers. Microcomputers are the driving technology behind the growth of personal computers and workstations.

1.1.3    System Diagnosis

            Diagnosis, in medical terms, as defined by Wikipedia, refers to the process of attempting to determine and/or identify a possible disease or disorder, and the opinion reached by this process. More generally, diagnosis is the identification of the nature and cause of anything. Diagnosis is used in many different disciplines, with variations in the use of logic, analytics and experience to determine cause-and-effect relationships. In systems engineering and computer science, diagnosis is typically used to determine the cause of symptoms, mitigations for problems, and solutions to issues.

            The two words “diagnosis” and “troubleshooting” are often used interchangeably (Sullivan Mike, 1982). Troubleshooting is a form of problem solving, often applied to repairing failed products or processes. It is a logical, systematic search for the source of a problem so that it can be solved and the product or process made operational again. Troubleshooting is needed to develop and maintain complex systems, where the symptoms of a problem can have many possible causes.

            In general, troubleshooting is the identification, or diagnosis, of “trouble” in the management flow of a corporation or in a system, caused by a failure of some kind. The problem is initially described as symptoms of malfunction, and troubleshooting is the process of determining and remedying the cause of these symptoms. Any unexpected or undesirable behaviour is a symptom, and troubleshooting is the process of isolating its specific cause or causes. One of the core principles of troubleshooting is that reproducible problems can be reliably isolated and resolved. Often, considerable effort and emphasis in troubleshooting is therefore placed on reproducibility: on finding a procedure to reliably induce the symptom to occur. Once this is done, systematic strategies can be employed to isolate the cause or causes of the problem, and the resolution generally involves repairing or replacing the components that are at fault.

            Some of the most difficult troubleshooting issues relate to symptoms that occur only intermittently. Most discussion of troubleshooting, and especially training in formal troubleshooting procedures, tends to be domain-specific, even though the basic principles are universally applicable. Usually troubleshooting/diagnosis is applied to something that has suddenly stopped working, since its previously working state forms the expectation about its continued behaviour, so the initial focus is often on recent changes to the system or to the environment in which it exists: for example, a printer that stopped working after it was plugged in somewhere else. However, there is a well-known principle that correlation does not imply causality: the failure of a device shortly after it has been plugged into a different outlet does not necessarily mean that the two events were related; the failure could have been a matter of coincidence. Troubleshooting/diagnosis demands critical thinking rather than magical thinking.

            A basic principle in troubleshooting is to start with the simplest and most probable possible problems first. This should not be taken as an affront; rather, it should serve as a reminder, or conditioning, to always check the simple things first before calling for help. A troubleshooter could check each component in a system one by one, substituting known good components for each potentially suspect one. However, this process of “serial substitution” can be considered degenerate when components are substituted without a hypothesis concerning how their failure could result in the symptoms being diagnosed.

            Efficient methodical troubleshooting starts with a clear understanding of the expected behaviour of the system and the symptoms being observed. From there the troubleshooter forms hypotheses on potential causes, and devises (or perhaps references from a standardized checklist) tests to eliminate these prospective causes. Two common strategies used by troubleshooters are to check for frequently encountered, easily tested conditions first (for example, checking to ensure that a printer’s light is on and that its cable is firmly seated at both ends), and to “bisect” the system (for example, in a network printing system, checking whether the job reached the server, to determine whether the problem lies in the subsystems “towards” the user’s end or “towards” the device). The latter technique can be particularly efficient in systems with long chains of serialized dependencies or interactions among their components. It is simply the application of binary search across the range of dependencies and is often referred to as “half splitting”. It also helps to start from a known good state, the best example being a computer reboot. A cognitive walkthrough is also a good thing to try. Comprehensive documentation produced by proficient technical writers is very helpful, especially if it provides a theory of operation for the subject device or system.
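The “half splitting” strategy described above can be sketched as a binary search over an ordered chain of stages. The sketch below is illustrative only; the stage names and the works_up_to check are hypothetical stand-ins for the manual tests a troubleshooter would actually perform.

```python
def half_split(stages, works_up_to):
    """Locate the first failing stage in a chain of serialized
    dependencies by bisection ("half splitting").

    stages      -- ordered list of stage names, e.g. the path a print
                   job takes from the application to the printer
    works_up_to -- callable: works_up_to(i) is True if the system
                   behaves correctly through stage i (in practice,
                   a manual check performed by the troubleshooter)
    """
    lo, hi = 0, len(stages) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if works_up_to(mid):
            lo = mid + 1   # fault lies after this stage
        else:
            hi = mid       # fault is at or before this stage
    return stages[lo]

# Hypothetical print path; suppose the spooler is the faulty stage.
path = ["application", "driver", "spooler", "server", "printer"]
faulty = 2  # index of the broken stage (for this simulation only)
print(half_split(path, lambda i: i < faulty))  # -> spooler
```

Each probe halves the remaining suspect region, so a chain of n stages needs only about log2(n) tests rather than n.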

            Troubleshooting can take the form of a systematic checklist, troubleshooting procedure, flowchart or table that is prepared before a problem occurs. Developing troubleshooting procedures in advance allows sufficient thought about the steps to take and organizes them into the most efficient process. Troubleshooting tables can be computerized to make them more efficient for users.
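A computerized troubleshooting table of the kind described above can be sketched as a mapping from a symptom to an ordered list of (probable cause, remedy) pairs, most probable and most easily tested first. The entries below are illustrative examples only, not drawn from the project's actual fault database.

```python
# A troubleshooting table reduced to a simple lookup structure.
# The entries are illustrative, not an exhaustive fault catalogue.
FAULT_TABLE = {
    "no power": [
        ("Power cable unplugged", "Check that the cable is firmly seated"),
        ("Faulty power supply unit", "Test or replace the PSU"),
    ],
    "no display": [
        ("Monitor cable loose", "Reseat the video cable at both ends"),
        ("Faulty RAM module", "Reseat or replace the memory modules"),
    ],
}

def diagnose(symptom):
    """Return (cause, remedy) pairs for a reported symptom, ordered
    from the most probable / most easily tested cause first."""
    return FAULT_TABLE.get(symptom.lower().strip(), [])

for cause, remedy in diagnose("No display"):
    print(f"{cause}: {remedy}")
```

In the actual system this table would live in the back-end database rather than in code, but the lookup principle is the same.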

1.2       Motivation for the Study

            Typically, computer users are in the habit of taking their faulty systems to a technician for repair and maintenance because they cannot troubleshoot (diagnose) the systems themselves. The motivation for this study is to develop a Windows-based software application that can successfully guide and help computer users to diagnose, troubleshoot and fix their faulty computer systems.

1.3       Aims and Objectives of the Study

            The aim of the study is to design and implement an Automated (Computerised) Computer Hardware Diagnosis System that will help computer users to analyse computer faults and provide relevant solutions to those problems. Also, this study proposes to develop a system that would allow users to:

·         Troubleshoot their personal computer(s) with a view to analysing the system’s malfunctions.

·         Personally repair any malfunctioning computer system and accessories.

·         Follow a preventive maintenance programme that addresses the needs of businesses that do not have a qualified computer technician on staff.

·         Greatly reduce the risk of data loss or hardware failure.

·         Make working with computer systems faster, more interesting and more engaging.

·         Increase business profit by spending less money on system maintenance and repair.

1.4       Scope of the Study

            This project, through a developed (Windows-based) application and a well-designed database, will point out the known problems that may be affecting a computer system and its various components. However, the scope of this project work will mainly cover computer hardware, such as the system unit and its various internal components, computer peripherals, etc.

1.5       Significance of the Study

            This project work, when fully implemented, will computerize the process of troubleshooting. It will serve as a reference checklist for computer technicians. Also, it will be a guide for computer users during computer maintenance and repair. Lastly, it will reduce the brute-force method of diagnosing computer faults.

1.6       Definition of Terms

Abacus: A calculating device, probably of Babylonian origin, that was long important in commerce.

Algorithm: A systematic procedure that produces, in a finite number of steps, the answer to a question or the solution of a problem.

CPU: Central Processing Unit

RAM: Random Access Memory

MIPS: Millions of Instructions per Second

I/O: Input/Output

ENIAC: Electronic Numerical Integrator and Computer

UNIVAC: Universal Automatic Computer

Vacuum Tube: Electronic device, consisting of a glass or steel vacuum envelope and two or more electrodes between which electrons can move freely.

Transistor: In electronics, the common name for a group of electronic devices used as amplifiers or oscillators in communications, control, and computer systems.

Troubleshooting: The act or process of identifying and eliminating problems or faults, especially in electronic or computer equipment.

Maintenance: Continuing repair work, i.e. work that is done regularly to keep a machine, building, or piece of equipment in good condition and working order.

Peripheral: A term used for devices, such as disk drives, printers, modems, and joysticks, that are connected to a computer and controlled by its microprocessor.

Computer Accessories: A peripheral or add-on to a computer system.
