f:\12000 essays\technology & computers (295)\2000 Problem.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Fiction, Fantasy, and Fact: "The Mad Scramble for the Elusive Silver Bullet . . . and the Clock Ticks Away."

Wayne Anderson
November 7, 1996

The year 2000 is practically around the corner, promising a new era of greatness and wonder . . . as long as you don't own a computer or work with one. The year 2000 is bringing a Pandora's box of gifts to the computer world, and the latch is slowly coming undone.

The year 2000 bug is not really a "bug" or "virus," but rather a computer industry mistake. Many of the PCs, mainframes, and software packages out there are not designed or programmed to compute a future year ending in double zeros. This is going to be a costly "fix" for the industry to absorb. In fact, Mike Elgan, the editor of Windows Magazine, says " . . . the problem could cost businesses a total of $600 billion to remedy." (p. 1)

The fallacy that mainframes were the only machines to be affected was short-lived, as the industry realized that the 60 to 80 million home and small business users doing math or accounting on Windows 3.1 or older software are just as susceptible to this "bug." Can this be repaired in time? For some, it is already too late. A system devised to cut the annual federal deficit to zero by the year 2002 is already in "hot water," as its data will become erroneous when the numbers "just don't add up" anymore. Some PC owners can upgrade their computer's BIOS (Basic Input/Output System) and upgrade the OS (operating system) to Windows 95; this will set them up for another 99 years. Older software, however, may very well have to be replaced or, at the very least, upgraded.

The year 2000 has become a two-fold problem. One is the inability of the computer to adapt to the MM/DD/YY issue; the second is our reluctance to address the impact it will have.
Most IS (information system) people are either unconcerned or unprepared. Let me give you a "short take" on the problem we all are facing. To save storage space (and perhaps reduce the keystrokes needed to enter the year), most IS groups have allocated two digits to represent the year. For example, "1996" is stored as "96" in data files and "2000" will be stored as "00." These two-digit dates will be on millions of files used as input for millions of applications. The two-digit date affects data manipulation, primarily subtractions and comparisons. (Jager, p. 1) For instance, I was born in 1957. If I ask the computer to calculate how old I am today, it subtracts 57 from 96 and announces that I'm 39. So far so good. In the year 2000, however, the computer will subtract 57 from 00 and say that I am -57 years old. This error will affect any calculation that produces or uses time spans, such as an interest calculation. Bankers beware!

Bringing the problem closer to the home front, let's examine how the CAPS system is going to be affected. As CAPS is a multifaceted system, I will focus on one area in particular, ISIS. ISIS (Integrated Student Information System) has the ability to admit students, register them, bill them, and maintain an academic history of each student (grades, transcripts, transfer information, etc.) inside of one system. This student information system has hundreds and hundreds of references to dates within its code. It is a COBOL system accessing an ADABAS database. ADABAS is the file and file access method used by ISIS to store and retrieve student records. (Shufelt, p. 1) ADABAS has a set of rules for setting up keys to specify which record to access and what type of action (read, write, delete) is to be performed. The dates will have to have centuries appended to them in order to remain correct.
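The two-digit arithmetic failure described above is easy to reproduce. The sketch below is illustrative Python, not actual IS code (the function name is invented for the example); it mirrors the age calculation exactly:

```python
def age_from_two_digit_years(birth_yy, current_yy):
    """Compute an age the way a system storing only two-digit years does."""
    return current_yy - birth_yy

# In 1996 the calculation works: stored "96" minus stored "57".
print(age_from_two_digit_years(57, 96))   # 39

# In 2000 the stored year rolls over to "00" and the span goes negative.
print(age_from_two_digit_years(57, 0))    # -57
```

Any interest, seniority, or expiration computation built on the same subtraction fails the same way.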
Their (CAPS) "fix" is to change the code in the Procedure Division, using 30 as the cutoff: years greater than 30 get century "19," years less than 30 get century "20." In other words, if the year in question is greater than 30 (>30), it can be assumed that you are referring to a year in the 20th century, and a "19" will be moved to the century field. If the year is less than 30 (<30), a "20" will be moved to the century field. If absolutely necessary, ISIS will add a field and a superdescriptor index in order to keep record retrieval in the order that the program code expects. The current compiler at CAPS will not work beyond the year 2000 and will have to be replaced. The "temporary fix" (kludge) just discussed will allow ISIS to operate until the year 2030, by which time they hope to have replaced the current system.

For those of you with your own home computers, let's get up close and personal. This problem will affect you as well! Up to 80% of all personal PCs will fail when the year 2000 arrives. More than 80,000,000 PCs will be shut down December 31, 1999 without incident; when they are powered on January 1, 2000, some 80,000,000 PCs will go "belly up!" (Jager, p. 1) These computers will think the Berlin Wall is still standing and that Nixon was just elected President! There is, however, a test that you can perform in order to see if you are one of the "lucky" minority whose PCs do not have a problem with the year 2000. First, set the date on your computer to December 31, 1999. Next, set the time to 23:58 hours (if you use a 24-hour clock) or 11:58 p.m. (for 12-hour clocks). Now, power off the computer for at least 3 to 5 minutes. Note: it is appropriate at this time to utter whatever mantras or religious chants you feel may be beneficial to your psyche. Next, power on the computer and check your time and date. If it reads January 1, 2000 and about a minute or two past midnight, breathe a sigh of relief; your system clock is free from the year 2000 "bug."
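The CAPS cutoff rule described above is an instance of what remediators call "windowing," and it can be sketched in a few lines. This is an illustrative rendering in Python, not the actual COBOL from the Procedure Division:

```python
PIVOT = 30  # CAPS cutoff: years above 30 are 19xx, the rest are 20xx

def expand_year(yy):
    """Append a century to a two-digit year using the CAPS pivot rule."""
    century = 19 if yy > PIVOT else 20
    return century * 100 + yy

print(expand_year(96))  # 1996
print(expand_year(5))   # 2005
```

Note that windowing only defers the ambiguity: in 2030 the window itself rolls over, which is exactly why CAPS plans to replace the system before then.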
If, however, your computer gives you wrong information, as my own PC did (March 12, 1945 at 10:22 a.m.), welcome to the overwhelming majority of the population that has been found "infected." All applications, from spreadsheets to e-mail, will be adversely affected. What can you do? Maybe you can replace your computer with one that is year 2000 compatible. Is the problem in the RTC (Real Time Clock), the BIOS, or the OS? Even if you fix the hardware problem, is all the software you use going to make the "transition" safely, or is it going to corrupt as well? The answers to these questions and others like them are not answerable with a yes or a no. For one thing, the "leading experts" in the computer world cannot agree that there is even a problem, let alone discuss the magnitude of its impact on society and the business world.

CNN correspondent Jed Duvall illustrates another possible "problem" scenario. Suppose an individual on the East Coast, at 2 minutes after midnight in New York City on January 1, 2000, decides to mark the year and the century by calling a friend in California, where, because of the time zone difference, it is still 1999. With the current configurations in the phone company computers, the New Yorker will be billed from 00 to 99, a phone call some 99 years long! (p. 1)

Or suppose you deposit $100 into a savings account that pays 5% interest annually. The following year you decide to close your account. The bank computer figures your $100 was there for one year at 5% interest, so you get $105 back; simple enough. What happens, though, if you don't take your money out before the year 2000? The computer will re-do the calculation exactly the same way. Your money was in the bank from '95 to '00. That's '00 minus '95, which equals a negative 95 (-95). That's -95 years at 5% interest. That's a little bit more than $10,000, and because of the minus sign, it's going to subtract that amount from your account. You now owe the bank $9,900.
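The bank scenario can be traced step by step. The sketch below is hypothetical code that reproduces the faulty arithmetic as described: the span of years goes negative, the 5% compounding runs over the magnitude of that span, and the sign of the span is applied to the result:

```python
def naive_compound(principal, rate, opened_yy, closed_yy):
    """Faulty interest calculation using two-digit years (illustrative only)."""
    years = closed_yy - opened_yy             # '00 - '95 gives -95
    amount = principal * (1 + rate) ** abs(years)
    return amount if years >= 0 else -amount  # sign follows the bad span

print(round(naive_compound(100, 0.05, 95, 96), 2))  # 105.0 -- correct in 1996
print(naive_compound(100, 0.05, 95, 0))   # roughly -10300: "you owe the bank"
```

One hundred dollars compounded at 5% over 95 years is a little over $10,000, which matches the essay's figure once the minus sign is applied.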
Do I have your attention yet? There is no industry that is immune to this problem; it is a cross-platform problem that will affect PCs, minis, and mainframes. There are no "quick fixes," or what everyone refers to as the "Silver Bullet." The Silver Bullet is the term used for an automatic fix for the Yk2 problem. There are two major problems with this philosophy. First, there are too many variables, from hardware to software of different types, to think that a "cure-all" can be found that will create an across-the-board fix. Secondly, the general population's belief that such a "fix" exists, or that one can be created quickly and easily, is leading people to put off addressing the problem in reliance on the "cure-all." The " . . . sure, someone will fix it" attitude pervades the industry and the population, making the problem even more serious than it already is. (Jager, p. 1) People actually think that there is a program that you can start running on Friday night . . . everybody goes home, and Monday morning the problem has been fixed. Nobody has to do anything else; the Yk2 problem poses no more threat; it has been solved. To quote Peter de Jager: "Such a tool, would be wonderful. Such a tool, would be worth Billions of dollars. Such a tool, is a naïve pipe dream. Could someone come close? Not very . . . Could something reduce this problem by 90%? I don't believe so. Could it reduce the problem by 50%? Possibly . . . but I still don't believe so. Could it reduce the workload by 30%? Quite likely." (p. 2) Tools are available, but they are only tools, not cures or quick fixes.

How will this affect society and the industry in 2000? How stable will software design companies be as more and more competitors offer huge "incentives" for people to "jump ship" and come work for them on their problems? Cash flow problems will put people out of business.
Computer programmers will make big bucks from now until 2000 as demand increases for their expertise. What about liability issues that arise because company "A" reneged on a deal because of a computer glitch? Sue! Sue! Sue! What about ATM lockups, credit card failures, medical emergencies, or downed phone systems? This is a widespread scenario, because the Yk2 problem will affect all these elements and more. The dimensions of this challenge are apparent. Given society's reliance on computers, the failure of systems to operate properly can mean anything from minor inconveniences to major problems: licenses and permits not issued, payroll and social service checks not cut, personnel, medical, and academic records malfunctioning, errors in banking and finance, accounts not paid or received, inventory not maintained, weapon systems malfunctioning (shudder!), constituent services not provided, and so on, and so on. Still think you'll be unaffected? Highly unlikely. This problem will affect computations that calculate age, sort by date, compare dates, or perform some other type of specialized task.

The Gartner Group and others have made the following approximations (ITAA, p. 1):

* At $450 to $600 per affected computer program, it is estimated that a medium-size company will spend from $3.6 to $4.2 million to make the software conversion.
* The cost per line of code is estimated to be $.80 to $1.
* VIASOFT has seen per-program conversion costs rise to between $572 and $1,204.
* ANDERSEN CONSULTING estimates that it will take more than 12,000 working days to correct its existing applications.
* YELLOW CORPORATION estimates it will spend approximately 10,000 working days to make the change.
* Estimates for the correction of this problem in the United States alone run upward of $50 to $75 billion.

Is it possible to eliminate the problem? Probably not, but we can make the transition much smoother with cooperation and the right approach.
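The per-line figures above lend themselves to a quick back-of-envelope check. The portfolio size in the example below is an assumption invented for illustration, chosen so a reader can see how a medium-size company's bill reaches the millions:

```python
def conversion_cost(lines_of_code, low_cents=80, high_cents=100):
    """Dollar cost range at the quoted $.80 to $1.00 per line of code."""
    return (lines_of_code * low_cents / 100, lines_of_code * high_cents / 100)

# A hypothetical 4-million-line application portfolio:
low, high = conversion_cost(4_000_000)
print(f"${low:,.0f} to ${high:,.0f}")  # $3,200,000 to $4,000,000
```

At that rate, a 4-million-line portfolio lands in the same neighborhood as the quoted $3.6 to $4.2 million conversion estimate.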
Companies and government agencies must understand the nature of the problem. Unfortunately, the kind of spending devoted to new software development is not being devoted to Yk2 research. Ignoring the obvious is not the way to approach this problem. To assume that the problem will be corrected when the system is replaced can be a costly misjudgment: priorities change, development schedules slip, and system components get reused, making the problem even more widespread. Correcting the situation may not be so much difficult as time consuming. For instance, the Social Security Administration estimates that it will spend 300 man-years finding and correcting these date references in its information systems - systems representing a total of 30 million lines of code. (ITAA, p. 3) Common sense dictates that a comprehensive conversion plan be developed to address the more immediate functions of an organization first (such as invoicing, paying benefits, collecting taxes, or other critical organization functions), and continue from there to address the less critical aspects of operation. Some automated tools may help with the "repair" of the systems, such as:

* line-by-line impact analysis of all date references within a system, both in terms of data and procedures;
* project cost estimating and modeling;
* identification and listing of affected locations;
* editing support to make the actual changes required;
* change management;
* and testing to verify and validate the changed system. (ITAA, p. 3)

Clock simulators can run a system with a simulated clock date to exercise applications that will produce errors when the year 2000 arrives. Date finders search across applications for specific date criteria, and browsers help users perform large-volume code inspection. As good as all these automated tools are, there are NO "Silver Bullets" out there. There are no quick fixes.
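The "date finder" category above is conceptually the simplest: scan source files for likely date references and hand the hits to a human reviewer. The sketch below is a toy Python illustration; the hint patterns are invented and far cruder than what a commercial tool would use:

```python
import re

# Naive patterns that often flag date handling in legacy source code.
DATE_HINTS = re.compile(r"\b(yy|date|year|mmddyy|yymmdd)\b", re.IGNORECASE)

def find_date_references(source_lines):
    """Yield (line_number, line) pairs a reviewer should inspect."""
    for n, line in enumerate(source_lines, start=1):
        if DATE_HINTS.search(line):
            yield n, line.rstrip()

cobol = [
    "01 BIRTH-DATE PIC 9(6).",
    "MOVE ACCT-NO TO OUT-REC.",
    "COMPUTE AGE = CUR-YY - BIRTH-YY.",
]
for n, line in find_date_references(cobol):
    print(n, line)   # flags lines 1 and 3
```

Real impact-analysis tools go further, tracing data flow so that fields merely copied from a flagged date field are flagged as well.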
It will take old-fashioned work-hours by personnel to make this "rollover" smooth and efficient. Another area to look at is the implications for public health information. Public health information and surveillance at the local, state, federal, and international levels are especially sensitive to and dependent upon dates, for epidemiological (the study of disease occurrence, location, and duration) and health statistics reasons. The dates of events, durations between events, and other calculations such as the ages of people are core epidemiologic and health statistic requirements. (Seligman, p. 1) Along with this, public health authorities are usually dependent upon primary data providers, such as physician practices, laboratories, hospitals, managed care organizations, and out-patient centers, as the source of the original data upon which public health decisions are based. The CDC (Centers for Disease Control and Prevention), for example, maintains over 100 public health surveillance systems, all of which are dependent upon external sources of data. (Issa, p. 5) This means that making internal systems year 2000 compliant will not be sufficient to address all of the ramifications of this issue. To illustrate the point, consider the following scenario: in April 2000, a hospital sends an electronic surveillance record to the local or state health department reporting the death of an individual who was born in the year "00." Is this a case of infant mortality or a geriatric case?

Finally, let's look at one of the largest software manufacturing corporations and see what the implications of the year 2000 will be for Microsoft products. Microsoft states that Windows 95 and Windows NT are capable of supporting dates up until the year 2099.
They also make this statement, however: "It is important to note that when short, assumed dates (mm/dd/yy) are entered, it is impossible for the computer to tell the difference between a day in 1905 and 2005. Microsoft's products, that assume the year from these short dates, will be updated in 1997 to make it easier to assume a 2000-based year. As a result, Microsoft recommends that by the end of the century, all PC software be upgraded to versions from 1997 or later." (Microsoft, p. 1)

For informational purposes, I have included a chart that represents the more popular Microsoft products, their date limits, and date formats. (Microsoft, p. 3)

PRODUCT NAME                           DATE LIMIT        DATE FORMAT
Microsoft Access 95                    1999              assumed "yy" dates
Microsoft Access 95                    9999              long dates ("yyyy")
Microsoft Access (next version)        2029              assumed "yy" dates
Microsoft Excel 95                     2019              assumed "yy" dates
Microsoft Excel 95                     2078              long dates ("yyyy")
Microsoft Excel (next version)         2029              assumed "yy" dates
Microsoft Excel (next version)         9999              long dates ("yyyy")
Microsoft Project 95                   2049              32 bits
Microsoft SQL Server                   9999              "datetime"
MS-DOS(r) file system (FAT16)          2099              16 bits
Visual C++(r) (4.x) runtime library    2036              32 bits
Visual FoxPro                          9999              long dates ("yyyy")
Windows 3.x file system (FAT16)        2099              16 bits
Windows 95 file system (FAT16)         2099              16 bits
Windows 95 file system (FAT32)         2108              32 bits
Windows 95 runtime library (WIN32)     2099              16 bits
Windows for Workgroups (FAT16)         2099              16 bits
Windows NT file system (FAT16)         2099              16 bits
Windows NT file system (NTFS)          future centuries  64 bits
Windows NT runtime library (WIN32)     2099              16 bits

Microsoft further states that its development tools and database management systems give the user the flexibility to represent dates in many different ways. Proper training of developers to use date formats that accommodate the transition to the year 2000 is of the utmost importance.

So . . . is everyone affected? Apparently not.
In speaking with the owners of St. John Valley Communications, an Internet access provider based in Fort Kent, I found that they are eagerly awaiting the coming of 2000. The owners, Alan Susee and Dawn Martin, had enough foresight to make sure, when they purchased their equipment and related software, that it would all be year 2000 compliant. It can be done, as evidenced by this industrious couple. The key is to get informed and to stay informed. Effect the changes you can now, and look to remedy the ones that you can't. The year 2000 will be a shocker and a thriller for many businesses, but St. John Valley Communications seems to have it under control, holding party hats in one hand and the mouse in the other.

As is clear from the information presented, Yk2 is a problem to be reckoned with. The wide range of systems (OSs) and software on the market lends credence to the idea that a "silver bullet" fix is a pipe dream in the extreme. This is not, however, an insurmountable problem. Efficient training and design are needed, as well as a multitude of man-hours, to effect the "repairs" needed to quell the ramifications and repercussions that will inevitably occur without intervention. The sit-back-and-wait-for-a-cure-all approach will not work, and it is hard to imagine that IS people, who have advanced knowledge to the contrary, would buy into this propaganda of slow technological death. To misquote an old adage, "The time for action was 10 years ago." Whatever may happen, January 1, 2000 will be a very interesting time for some, a relief for others . . . and a cyanide capsule for the "slackers." What will you do now that you are better "informed"? Hopefully you will effect the necessary "repairs" and pass the word to others who may be taking this a little too lightly. It may not be a matter of life or death, but it sure as heck could mean your job and financial future.

WORKS CITED

Elgan, Mike.
"Experts bemoan the denial of '2000 bug'." http://www.cnn.com/2000 (31 October 1996).

Jager, Peter de. "DOOMSDAY." http://www.year2000.com/doom (2 November 1996).

---. "Believe me, it's real! Early Warning." http://www.year2000.com (4 November 1996).

---. "Biting the Silver Bullet." http://www.year2000.com/bullet (2 November 1996).

Shufelt, Ursula. "Yk2." Ursula@maine.maine.edu (7 November 1996).

Duvall, Jed. "The year 2000 does not compute." http://www.cnn.com/news (3 November 1996).

ITAA. "The Year 2000 Software Conversion: Issues and Observations." http://www.itaa.org/yr2000-1.htm (7 November 1996).

Seligman, James & Issa, Nabil. "The Year 2000 Issue: Implications for Public Health Information and Surveillance Systems." http://www.cdc.gov/year2000.htm (9 November 1996).

Microsoft. "Implications of the Year 2000 on Microsoft Products." http://army.mil/army-yk2/articles/y2k.htm (9 November 1996).

f:\12000 essays\technology & computers (295)\A Brief History of Databases.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Brief History Of Data Bases

In the 1960's, the use of mainframe computers became widespread in many companies. To access vast amounts of stored information, these companies started to use programs written in languages like COBOL and FORTRAN. Data accessibility and data sharing soon became important features because of the large amount of information required by different departments within certain companies. Under this system, each application owned its own data files. The problems associated with this type of file processing were uncontrolled redundancy, inconsistent data, inflexibility, poor enforcement of standards, and low programmer productivity. In 1964, MIS (Management Information Systems) was introduced. This would prove to be very influential on future designs of computer systems and the methods they would use in manipulating data.
In 1966, Philip Kotler gave the first description of how managers could benefit from the powerful capabilities of the electronic computer as a management tool. In 1969, Berson developed a marketing information system for marketing research. In 1970, the Montgomery urban model was developed, stressing the quantitative aspect of management by highlighting a data bank, a model bank, and a measurement statistics bank. All of these factors would be influential on future models of storing data in a pool. According to Martine, in 1981, a database is a shared collection of interrelated data designed to meet the needs of multiple types of end users. The data is stored in one location so that it is independent of the programs that use it, keeping in mind data integrity with respect to the approaches to adding new data, modifying data, and retrieving existing data. A database is shared and perceived differently by multiple users.

This led to the arrival of Database Management Systems. These systems first appeared around the 1970's as solutions to problems associated with mainframe computers. Originally, pre-database programs accessed their own data files. Consequently, similar data had to be stored in every area where that piece of information was relevant. Simple things like addresses were stored in customer information files, accounts receivable records, and so on. This created redundancy and inefficiency. Updating files, like storing files, was also a problem. When a customer's address changed, all the fields where that customer's address was stored had to be changed. If a field happened to be missed, an inconsistency was created. When requests to develop new ways to manipulate and summarize data arose, it only added to the problem of having files attached to specific applications. New system design had to be done, including new programs and new data file storage methods.
The close connection between data files and programs sent the costs for storage and maintenance soaring. This, combined with inflexibility in the kinds of data that could be extracted, gave rise to the need for an effective and efficient design. Here is where Database Management Systems helped restore order to a system of inefficiency. Instead of having separate files for each program, one single collection of information was kept: a database. Now many programs could access one database, through a program known as a database manager, with the confidence of knowing that the information accessed is up to date and exclusive. Some early DBMSs were:

Condor 3
dBaseIII
Knowledgeman
Omnifile
Please
Power-Base
R-Base 4000

Condor 3, dBaseIII, and Omnifile will be examined more closely.

Condor 3

Condor 3 is a relational database management system that evolved in the microcomputer environment beginning in 1977. Condor provides multi-file, menu-driven relational capabilities and a flexible command language. Frequently used commands can be automated by using a word processor, due to the absence of a built-in text editor. Condor 3 is an application development tool for multiple-file databases. Although it lacks some capabilities, like procedure repetition, it makes up for this with its ease of use and decent speed. Condor 3 utilizes the advantages of menu-driven design. Its portability enables it to import and export data files in five different ASCII formats. Defining file structures is relatively straightforward: by typing the field names and their lengths, the main part of designing the structure is about complete. Condor uses six data types:

alphabetic
alphanumeric
numeric
decimal numeric
Julian date
dollar

Once the fields have been designed, data entry is as easy as pressing Enter and inputting the respective values in the appropriate fields, and like the newer databases, Condor too can use the Update, Delete, Insert, and Backspace commands.
Accessing data is done by creating an index. The index can be used to perform sorts and arithmetic.

dBaseIII

dBaseIII is a relational DBMS which was partially built on dBaseII. Like Condor 3, dBaseIII is menu-driven and has its menus built in several levels. One of the problems discovered was that higher-level commands were not included in the menus at all levels; the menus cover only basic commands, and anything beyond that is not supported there. Many of the basic capabilities are easy to use, but like Condor, dBaseIII has inconsistencies and inefficiencies. The keys used to move and select items in specific menus are not always consistent throughout. If you mark an item to be selected from a list, once it's marked it cannot be unmarked; the only way to correct this is to start over and enter everything again. This is time consuming and obviously inefficient. Although the menus are helpful and guide you through the stages or levels, there is the option to turn off the menus and work at a somewhat faster rate. dBaseIII's commands are procedural (function-oriented) and flexible. It offers many of the common functions, with the ability to:

select records
select fields
include expressions (such as calculations)
redirect output to the screen or to the printer
store results separately from the application

Included in dBaseIII is a limited editor which will let you create commands using the editor or a word processor. Unfortunately, it is still limited to certain commands; for example, it cannot create move or copy commands. It also has a screen design package which enables you to design how you want your screen to look. The minimum RAM requirement of 256K for this package really illustrates how old this application is. The most noticeable problem documented about dBaseIII is its inability to edit command lines.
If, for example, an error was made entering the name and address of a customer, simply backing up and correcting the wrong character is impossible without deleting everything up to the correction and re-entering it all again. dBaseIII is portable and straightforward to work with. It allows users to import and export files in two forms: fixed-length fields and delimited fields. It can also perform dBaseII conversions. Creating file structures is simple using the menus or the create command. It has field types that are still used today by applications such as Microsoft Access - for example, numeric fields and memo fields, which let you enter sentences or pieces of information, like a customer's address, that might vary in length from record to record. Unlike Condor 3, dBaseIII is able to edit fields without having to start over. Inserting new fields or deleting old fields can be done quite easily. Data manipulation and query are very accessible through a number of built-in functions. The list and display commands enable you to see the entire file, selected records, and selected fields. The browse command allows you to scroll through all the fields, inserting or editing records at the same time. Calculation functions like sum, average, count, and total allow you to perform arithmetic operations on data in a file. There are other functions available, such as date and time functions, rounding, and formatting.

Omnifile

Omnifile is a single-file database system. This database is form-oriented, meaning that it has a master form with alternate forms attached to it. Therefore, you can work with one file and all of its subsets at the same time. The idea of alternate forms provides a greater level of security; for example, if a user needed to update an address field, they would not be able to access any fields which displayed confidential information. The field in need of updating would display only the necessary or relevant information.
Menus are once again present and used as a guide. The use of function keys allows the user to move about screens or forms quite easily. Menus are also used for transferring information, either for importing or for exporting. One inflexibility noted was that when copying files, the two files must have exactly the same fields in the same order as the master file. This can be a problem if you want to copy identical fields from different files. Forms design is simple but tedious. Although it may seem flexible to be able to paint the screen in any manner that you wish, it can be time consuming because no default screen is available. Like other database management systems, the usual syntax for defining fields applies: field name followed by the length of the field in braces. However, editing is a little more difficult. Changing the form can be done by inserting and deleting one character at a time. Omnifile does not support moving fields around, nor inserting blank lines. This means that if a field were to be added at the beginning of the record, the entire record would have to be re-entered. Records are added and viewed in the format that the user first designed. Invalid entries are not handled very well: entering an illegal value in a certain field results in a beep and no message, and the user is left there to try to decide what the error is. Omnifile does support the ability to insert new records while viewing existing records, and to make global or local changes. Querying can be performed by using an index or using a non-indexed search. If a search for a partial entry is made, like "Rob" instead of "Robinson", a message is displayed stating that an exact match was not found.

Overall

These are just a few of the database programs that helped start the whole database management system era. It is apparent that DBMSs today still use some of the fundamentals first implemented by these 'old' systems.
Items like menus, forms, and portability are still key parts of current applications. Programs have come a long way since then, but they still have the same fundamental principles as their basis.

f:\12000 essays\technology & computers (295)\A Brief History of Library Automation 19301996.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

An automated library is one where a computer system is used to manage one or several of the library's key functions, such as acquisitions, serials control, cataloging, circulation, and the public access catalog. When exploring the history of library automation, it is possible to return to past centuries, when visionaries well before the computer age created devices to assist with their book lending systems. Even as far back as 1588, the invention of the French "book wheel" allowed scholars to rotate between books by stepping on a pedal that turned a book table. Another interesting example was the "Book Indicator", developed by Albert Cotgreave in 1863. It housed miniature books to represent books in the library's collection. The miniature books were part of a design that made it possible to determine if a book was in, out, or overdue. These and many more examples of early ingenuity in library systems exist; however, this paper will focus on the more recent computer automation beginning in the early twentieth century.

The Beginnings of Library Automation: 1930-1960

It could be said that library automation development began in the 1930's, when punch card equipment was implemented for use in library circulation and acquisitions. During the 30's and early 40's, progress on computer systems was slow, which is not surprising given the Depression and World War II. In 1945, Vannevar Bush envisioned an automated system that would store information, including books, personal records, and articles.
Bush (1945) wrote about a hypothetical "memex" system which he described as a mechanical library that would allow a user to view stored information from several different access points and look at several items simultaneously. His ideas are well known as the basis for hypertext and hypermedia. Text-based computer languages made it possible for librarians to put computers to work on library operations; the first appeared at MIT, in 1957, with the development of COMIT, which managed linguistic computations and natural language and offered the ability to search for a particular string of information. Librarians then moved beyond a vision or idea for the use of computers; given the technology, they were able to make great advances in the use of computers for library systems. This led to an explosion of library automation in the 60's and 70's. Library Automation Is Officially Underway: 1960-1980 The advancement of technology led to increases in the use of computers in libraries. In 1961, a significant invention by Robert Noyce of Fairchild Semiconductor and Jack Kilby of Texas Instruments, working independently, was the integrated circuit. All the components of an electronic circuit were placed onto a single "chip" of silicon. This invention of the integrated circuit and newly developed disk and tape storage devices gave computers the speed, storage and ability needed for on-line interactive processing and telecommunications. The new potential for computer use guided one researcher to develop a new indexing technique. H.P. Luhn, in 1961, used a computer to produce the "keyword in context" or KWIC index for articles appearing in Chemical Abstracts. Although keyword indexing was not new, it was found to be very suitable for the computer as it was inexpensive and it presented multiple access points. Through the use of Luhn's keyword indexing, it was found that librarians had the ability to put controlled language index terms on the computer. By the mid-60's, computers were being used for the production of machine readable catalog records by the Library of Congress. 
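Luhn's keyword-in-context idea described above can be sketched in a few lines. This is a minimal illustration, not Luhn's actual program: the titles and stop-word list are invented for the example.

```python
# A minimal keyword-in-context (KWIC) index in the spirit of Luhn's 1961 work:
# every significant word of a title becomes an access point, shown with its
# full title as context. Stop words carry no meaning and are skipped.

STOP_WORDS = {"a", "an", "and", "the", "of", "in", "for", "on"}

def kwic(titles):
    """Return (keyword, title) pairs, one per significant word, sorted by keyword."""
    entries = []
    for title in titles:
        for word in title.split():
            if word.lower() not in STOP_WORDS:
                entries.append((word.lower(), title))
    return sorted(entries)

for keyword, context in kwic(["Analysis of Chemical Abstracts"]):
    print(f"{keyword:12} | {context}")
```

Because every significant word becomes an index entry, a single title appears under multiple access points, which is exactly what made the technique attractive for machine production.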
Between 1965 and 1968, LOC began the MARC I project, followed quickly by MARC II. MARC was designed as a way of "tagging" bibliographic records using 3-digit numbers to identify fields. For example, a tag might indicate "ISBN," while another tag indicates "publication date," and yet another indicates "Library of Congress subject headings" and so on. In 1974, the MARC II format became the basis of a standard incorporated by NISO (the National Information Standards Organization). This was a significant development because the standard meant that a bibliographic record could be read and transferred by computer between different library systems. ARPANET, a network established by the Defense Advanced Research Projects Agency in 1969, brought into existence the use of e-mail, telnet and ftp. By 1980, a sub-net of ARPANET made MELVYL, the University of California's on-line public access catalog, available on a national level. ARPANET would become the prototype for other networks such as CSNET, BITNET, and EDUCOM. These networks have almost disappeared with the evolution of ARPANET into NSFNET, which has become the present day Internet. During the 1970's the inventions of the integrated computer chip and storage devices caused the use of minicomputers and microcomputers to grow substantially. The use of commercial systems for searching reference databases (such as DIALOG) began. BALLOTS (Bibliographic Automation of Large Library Operations) in the late 1970's was one of the first such systems and later became the foundation for RLIN (the Research Libraries Information Network). BALLOTS was designed to integrate closely with the technical processing functions of the library and contained four main files: (1) MARC records from LOC; (2) an in-process file containing information on items in the processing stage; (3) a catalog data file containing an on-line record for each item; and (4) a reference file. 
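The MARC tagging scheme described above can be illustrated with a toy record. The tag numbers follow real MARC field numbering (020 = ISBN, 245 = title, 260 = publication data, 650 = subject heading), but the record content itself is invented for the example.

```python
# Toy illustration of MARC-style tagging: a record is a list of fields,
# each identified by a 3-digit tag rather than a position, so any system
# that knows the tag numbers can read the record. The record content here
# is invented; only the tag meanings follow the MARC convention.

record = [
    ("020", "0123456789"),                              # ISBN
    ("245", "A Brief History of Library Automation"),   # title
    ("260", "London : Meckler, 1992"),                  # publication data
    ("650", "Libraries -- Automation"),                 # subject heading
]

def field(record, tag):
    """Return every value stored under a given 3-digit tag."""
    return [value for t, value in record if t == tag]

print(field(record, "245"))
print(field(record, "650"))
```

Because lookup goes through the tag, not the field's position, two systems with different internal layouts can still exchange and interpret the same record, which is the interoperability point the paragraph above makes.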
Further, it contained a wide search retrieval capability with the ability to search on truncated words, keywords, and LC subject headings, for example. OCLC, the On-line Computer Library Center, began in 1967, chartered in the state of Ohio. This significant project facilitated technical processing in library systems when it started its first cooperative cataloging venture in 1970. It went on-line in 1971. Since that time it has grown considerably, providing research and utilities designed to give users access to bibliographic records and to scientific and literary information, a service that continues to the present. In order to have automation, there must first be a computer. The development of the computer progressed substantially from 1946 to 1961, moving quickly through a succession of vacuum tubes, transistors and finally silicon chips. From 1946 to 1947 two significant computers were built. The ENIAC I (Electronic Numerical Integrator and Computer) was developed by John Mauchly and J. Presper Eckert at the University of Pennsylvania. It contained over 18,000 vacuum tubes, weighed thirty tons and was housed in two stories of a building. It was intended for use during World War II but was not completed in time. Instead, it was used to assist the development of the hydrogen bomb. Another computer, EDVAC, was designed to store two programs at once and switch between the sets of instructions. A major breakthrough occurred in 1947 when Bell Laboratories invented the transistor to replace the vacuum tube. Transistors decreased the size of the computer and at the same time increased its speed and capacity. The UNIVAC I (Universal Automatic Computer) became the first commercially produced computer in the United States and was used at the U.S. Bureau of the Census from 1951 until 1963. Software development also was in progress during this time. Operating systems and programming languages were developed for the computers being built. 
Librarians needed text-based computer languages, different from the first numerical languages invented for the number crunching "monster computers", in order to be able to use computers for their operations. Library Automation 1980-present The 70's were the era of the dumb terminal, which was used to gain access to mainframe on-line databases. The 80's gave birth to a new revolution. The size of computers decreased; at the same time, technology provided faster chips, additional RAM and greater storage capacity. The use of microcomputers during the 1980's expanded tremendously into the homes, schools, libraries and offices of many Americans. The microcomputer of the 80's became a useful tool for librarians, who put them to use for everything from word processing to reference, circulation and serials. On-line Public Access Catalogs began to be used extensively in the 1980's. Libraries started to set up and purchase their own computer systems as well as connect with other established library networks. Many of these were not developed by the librarians themselves, but by vendors who supplied libraries with systems for everything from cataloging to circulation. One such on-line catalog system is the CARL (Colorado Alliance of Research Libraries) system. Various other software became available to librarians, such as spreadsheets and databases, for help in library administration and information dissemination. The introduction of CD-ROMs in the late 80's changed the way libraries operate. CD-ROMs became available containing databases, software, and information previously available only in print, making the information more accessible. 
Connections to "outside" databases such as OCLC, DIALOG, and RLIN continued; however, in the early 90's the databases that were previously available on-line became available on CD-ROM, either in parts or in their entirety. Libraries could then gain information through a variety of options. The nineties are giving rise to yet another era in library automation. The use of networks for e-mail, ftp, telnet, the Internet, and connections to on-line commercial systems has grown. It is now possible for users to connect to libraries from their home or office. The world wide web, which had its official start in April of 1993, is becoming the fastest growing new provider of information. It is also possible to connect to international library systems and information through the Internet and with ever improving telecommunications. Expert systems and knowledge systems have become available in the 90's as both software and hardware capabilities have improved. The technology used for the processing of information has grown considerably since the beginnings of the thirty ton computer. With the development of more advanced silicon computer chips, enlarged storage space and faster, increased capacity telecommunication lines, the ability to quickly process, store, send and retrieve information is causing the current information delivery services to flourish. Bibliography Bush, V. (1945). As we may think. Atlantic Monthly, 176(1), 101-8. Duval, B.K. & Main, L. (1992). Automated Library Systems: A Librarian's Guide and Teaching Manual. London: Meckler. Nelson, N.M. (Ed.) (1990). Library Technology 1970-1990: Shaping the Library of the Future. Research Contributions from the 1990 Computers in Libraries Conference. London: Meckler. Pitkin, G.M. (Ed.) (1991). The Evolution of Library Automation: Management Issues and Future Perspectives. London: Meckler. 
Title: A Brief History of Library Automation: 1930-1996 f:\12000 essays\technology & computers (295)\A Brief Look at Robotics.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ =================================================================== WIRED HANDS - A Brief Look at Robotics NEWSCIENCE ------------------------------------------------------------------- Two years ago, the Chrysler corporation completely gutted its Windsor, Ontario, car assembly plant and within six weeks had installed an entirely new factory inside the building. It was a marvel of engineering. When it came time to go to work, a whole new work force marched onto the assembly line. There on opening day was a crew of 150 industrial robots. Industrial robots don't look anything like the androids from sci-fi books and movies. They don't act like the evil Daleks or a fusspot C-3PO. If anything, the industrial robots toiling on the Chrysler line resemble elegant swans or baby brontosauruses with their fat, squat bodies, long arched necks and small heads. An industrial robot is essentially a long manipulator arm that holds tools such as welding guns, motorized screwdrivers or grippers for picking up objects. The robots working at Chrysler and in numerous other modern factories are extremely adept at performing highly specialized tasks - one robot may spray paint car parts while another does spot welds and another pours radioactive chemicals. Robots are ideal workers: they never get bored and they work around the clock. What's even more important, they're flexible. By altering its programming you can instruct a robot to take on different tasks. This is largely what sets robots apart from other machines; try as you might, you can't make your washing machine do the dishes. Although some critics complain that robots are stealing much-needed jobs away from people, so far they've been given only the dreariest, dirtiest, most soul-destroying work. 
The word robot is Slavic in origin and is related to the words for work and worker. Robots first appeared in a play, Rossum's Universal Robots, written in 1920 by the Czech playwright Karel Capek. The play tells of an engineer who designs man-like machines that have no human weakness and become immensely popular. However, when the robots are used for war they rebel against their human masters. Though industrial robots do dull, dehumanizing work, they are nevertheless a delight to watch as they crane their long necks, swivel their heads and poke about the area where they work. They satisfy "that vague longing to see the human body reflected in a machine, to see a living function translated into mechanical parts", as one writer has said. Just as much fun are the numerous "personal" robots now on the market, the most popular of which is HERO, manufactured by Heathkit. Looking like a plastic step-stool on wheels, HERO can lift objects with its one clawed arm and utter computer-synthesized speech. There's Hubot, too, which comes with a television screen face, flashing lights and a computer keyboard that pulls out from its stomach. Hubot moves at a pace of 30 cm per second and can function as a burglar alarm and a wake-up service. Several years ago, the swank department store Neiman-Marcus sold a robot pet named Wires. When you boil all the feathers out of the hype, HERO, Hubot, Wires et al. are really just super toys. You may dream of living like a slothful sultan surrounded by a coterie of metal maids, but any further automation in your home will instead include things like lights that switch on automatically when the natural light dims or carpets with permanent suction systems built into them. One of the earliest attempts at a robot design was a machine nicknamed Shakey by its inventor because it was so wobbly on its feet. Today, poor Shakey is a rusting pile of metal sitting in the corner of a California laboratory. 
Robot engineers have since realized that the greater challenge is not in putting together the nuts and bolts, but rather in devising the lists of instructions - the "software" - that tell robots what to do. Software has indeed become increasingly sophisticated year by year. The Canadian weather service now employs a program called METEO which translates weather reports from English to French. There are computer programs that diagnose medical ailments and locate valuable ore deposits. Still other computer programs play and win at chess, checkers and go. As a result, robots are undoubtedly getting "smarter". The Diffracto company in Windsor is one of the world's leading designers and makers of machine vision. A robot outfitted with Diffracto "eyes" can find a part, distinguish it from another part and even examine it for flaws. Diffracto is now working on a tomato sorter which examines colour, looking for non-red - i.e., unripe - tomatoes as they roll past its TV camera eye. When an unripe tomato is spotted, a computer directs a robot arm to pick out the pale fruit. Another Diffracto system helps the space shuttle's Canadarm pick up satellites from space. This sensor looks for reflections on a satellite's gleaming surface and can determine the position and speed of the satellite as it whirls through the sky. It tells the astronaut when the satellite is in the right position to be snatched up by the space arm. The biggest challenge in robotics today is making software that can help robots find their way around a complex and chaotic world. Seemingly sophisticated tasks such as those robots do in factories can often be relatively easy to program, while the ordinary, everyday things people do - walking, reading a letter, planning a trip to the grocery store - turn out to be incredibly difficult. The day has still to come when a computer program can do anything more than a highly specialized and very orderly task. 
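The sorting logic described for the tomato system can be sketched as a simple color-threshold check. Everything here is invented for illustration - the RGB readings, and especially the 1.2 ratio, are assumptions, not Diffracto's actual criteria:

```python
# Sketch of a color-threshold sorter in the spirit of the tomato system
# described above. Each tomato is an (r, g, b) camera reading; a fruit whose
# red channel does not clearly dominate is flagged for the picking arm.
# The 1.2 ratio is an invented threshold, not the real system's criterion.

def is_unripe(rgb):
    r, g, b = rgb
    return r < 1.2 * max(g, b)  # not red enough: flag it

# Readings for three tomatoes rolling past the camera (invented data).
conveyor = [(200, 60, 50), (120, 140, 60), (90, 80, 70)]

# Conveyor positions the arm should pick out.
to_pick = [i for i, tomato in enumerate(conveyor) if is_unripe(tomato)]
print(to_pick)
```

This is the kind of "easy in the factory" task the surrounding text contrasts with everyday human skills: the decision reduces to one comparison per fruit because the conveyor guarantees a predictable, orderly scene.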
The trouble with having a robot in the house, for example, is that life there is so unpredictable, as it is everywhere else outside the assembly line. In a house, chairs get moved around, there is invariably some clutter on the floor, and kids and pets are always running around. Robots work efficiently on the assembly line, where there is no variation, but they are not good at improvisation. Robots are disco, not jazz. The irony in having a robot housekeeper is that you would have to keep your house perfectly tidy, with every item in the same place all the time, so that your metal maid could get around. Many of the computer scientists who are attempting to make robots brighter are said to be working in the field of Artificial Intelligence, or AI. These researchers face a huge dilemma because there is no real consensus as to what intelligence is. Many in AI hold the view that the human mind works according to a set of formal rules. They believe that the mind is a clockwork mechanism and that human judgement is simply calculation. Once these formal rules of thought can be discovered, they will simply be applied to machines. On the other hand, there are those critics of AI who contend that thought is intuition, insight, inspiration. Human consciousness is a stream in which ideas bubble up from the bottom or jump into the air like fish. This debate over intelligence and mind is, of course, one that has gone on for thousands of years. Perhaps the outcome of the "robolution" will be to make us that much wiser. f:\12000 essays\technology & computers (295)\A Computer Science Report format.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ COMPUTER SCIENCE REPORT QUESTIONNAIRE 1. DO YOU HAVE A COMPUTERIZED SYSTEM? IF NOT, WHAT WOULD YOU LIKE COMPUTERIZED? 2. WHAT TYPE OF COMPUTER SYSTEM DO YOU HAVE? 3. ARE THERE ANY SETBACKS IN USING THIS SYSTEM? 4. IS THIS SYSTEM DOING ALL THAT IS REQUIRED TO BE DONE? 5. WHAT ARE THE ADVANTAGES? 6. WHAT ARE THE DISADVANTAGES? 7. 
ARE THERE ANY IMPROVEMENTS YOU WOULD LIKE IN YOUR COMPUTER SYSTEM? 8. IF SO, WHAT IMPROVEMENTS WOULD YOU RECOMMEND? ANSWERS FROM QUESTIONNAIRE: THE CLIFTON DUPIGNY COMMUNITY COLLEGE 1. NO, WE DO NOT HAVE A COMPUTERIZED SYSTEM. WE WOULD LIKE ALL ASPECTS OF THE SCHOOL'S RECORDS COMPUTERIZED. 2. WE DO MOST OF OUR WORK IN WORDPERFECT; WE HAVE NO COMPUTER PROGRAMMES. 3. YES, THERE ARE MANY SETBACKS. THE PERSON USING THE COMPUTER HAS TO FIGURE OUT EVERYTHING, WHICH LEADS TO A VERY HEAVY WORKLOAD AND LOSS OF TIME. 4. NO, THIS SYSTEM IS NOT DOING WHAT IS REQUIRED. 5. THE ONLY ADVANTAGE IS THAT WE CAN STORE OUR WORK ON THE COMPUTER. 6. THE DISADVANTAGES OF OUR SYSTEM ARE THAT IT IS SLOW, TIME CONSUMING AND INEFFICIENT, AMONG OTHERS. 7. YES, WE WOULD LIKE A LOT OF IMPROVEMENTS IN THE SYSTEM. 8. THE IMPROVEMENTS I WOULD RECOMMEND ARE A COMPUTER PROGRAM TO REGISTER STUDENTS, TO CHECK CLASS SCHEDULES, TO STORE STUDENT FILES, TO CHECK ON STUDENTS' MARKS, TO ARRANGE TIMETABLES, TEACHERS' SCHEDULES AND CLASS USAGE, TO DETERMINE THE PROMOTION OF STUDENTS, AND TO KEEP THE RECORD OF THE SCHOOL'S FINANCES. f:\12000 essays\technology & computers (295)\A Hacker.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ A Hacker A hacker is a person obsessed with computers. At the heart of the obsession is a drive to master the computer. The classic hacker was simply a compulsive programmer. It is only recently that the term hacker became associated with computerized vandalism. 
Great description of Hackers: Bright young men of disheveled appearance, often with sunken, glowing eyes. Seen sitting at computer consoles, their arms tense and waiting to fire their fingers, which are already poised to strike at the buttons and keys on which their attention seems to be riveted as a gambler's is on the rolling dice. They work until they nearly drop, twenty or thirty hours at a time if possible. They sleep on cots near the computer, but only a few hours - then back to the console, or printouts. Their crumpled clothes, their unwashed, unshaven faces, and uncombed hair testify that they are oblivious to their bodies and to the world in which they move. They exist, at least when so engaged, only through and for the computers. The majority of hackers are young men, often teenagers, who have found within the computer world something into which they can mold their desires. Another definition: a person totally engrossed in computer programming and computer technology. In the 1980s, with the advent of personal computers and dial-up computer networks, hacker acquired a pejorative connotation, often referring to someone who secretively invades others' computers, inspecting or tampering with the programs or data stored on them. (More accurately, though, such a person would be called a "cracker.") Hacker also means someone who, beyond mere programming, likes to take apart operating systems and programs to see what makes them tick. f:\12000 essays\technology & computers (295)\A Long Way From Univac.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Adv. Eng. 9 Computers A Long Way From Univac Can you imagine a world without computers? You most probably interact with some form of a computer every day of your life. Computers are the most important advancement our society has ever seen. They have an interesting history, many interesting inner components, they are used nearly everywhere, and they continue to advance incredibly fast. 
Because the field of computers is so broad, this paper will focus mainly on personal computers. Although computers have been evolving for quite some time, they really didn't gain popularity until the introduction of the personal computer. In 1977, Steve Jobs, co-founder of the Apple Computer Company, unveiled what is generally considered to be the first personal computer, the Apple II. This computer was introduced on April 16, 1977, at the First West Coast Computer Faire, in San Francisco. In 1981, the International Business Machines Corporation introduced the first IBM PC. Unlike Apple, IBM used a policy of open architecture for their computer. They bought all of their components from the lowest bidder, such as the 8086 and 8088 microprocessor chips, made by Intel. When the design of IBM's computer had been finalized, they shared most of the inner workings of the computer with everyone. IBM hoped that this would encourage companies to manufacture computers that were compatible with theirs, and that, in turn, this would cause software companies to create operating systems, or OSs, and other programs for the "IBM Compatible" line of computers. One of the computer manufacturers was a Texas company called Compaq. A company called Dell Computers was the first "factory direct" computer seller. A small Redmond, Washington company called Microsoft made a large amount of software for the "IBM Compatible" line of computers. This open architecture policy of IBM was not without its flaws, however. IBM lost some business to the "clones", which could offer more speed, more memory, or a smaller price tag. IBM had considered this an acceptable loss. One of the few components of the IBM PC that was kept from the clone manufacturers was the Basic Input Output System, or BIOS. This program, which was usually etched permanently on a chip, controlled the interactions between the internal hard and floppy drives, the external drives, printers, monitors, etc. 
Clone manufacturers had to make their own versions of an input output system. Some manufacturers copied the IBM BIOS exactly, such as Eagle Computers and Corona Data Systems. This is one adverse effect that IBM had not thought of. However, all of IBM's copyright violation lawsuits against these companies ended in IBM's favor. IBM has continued to grow to this day; however, the clone manufacturers make far more personal computers than IBM, while IBM makes more business machines, and the Power PC microprocessor, used in Macintosh computers. IBM clones are now made by Packard Bell, Sony, Acer, Gateway 2000, and more. The clones have continued to use software and operating systems made by Microsoft, including DOS (Disk Operating System), Windows, Windows 95, and Windows NT. The clones also primarily use microprocessors manufactured by Intel, including the 8086, 8088, 80286, 80386, 80486, Pentium and Pentium Pro, which offer speeds over 200 megahertz and will be even faster in the near future (Silver 7-28). Apple took a somewhat different course during this period. Not willing to enter the IBM clone manufacturing market, Apple continued to make their own kind of computers. They made minor improvements on the Apple II line, but eventually decided they needed to make a new type of computer. They first introduced the Apple III in September of 1980. It was a dismal failure. The first buyers encountered numerous system errors and failures because of a poor OS. Besides that, it was poorly manufactured, with improperly fitting circuitry, loose wires and screws, etc. The later released Apple III+ did poorly because of its brother's poor debut. The next big release was the Lisa in January of 1983. It was the first personal computer with a mouse and nice graphic capabilities. Experiments showed that it was 20 times as easy to use as the IBM PC, and it drew enormous praise from computer magazines. It had flaws too, however. 
It strained the power of the aging Motorola 68000 microprocessor, so it lost in speed tests to the IBM PC. It also came with a $10,000 price tag, over twice as much as most IBM clones. The Lisa failed, not as catastrophically as the Apple III, but failed nevertheless. Apple had but one more ace up their sleeve, and they released it in January of 1984. They called it the Macintosh, and it was very popular. Apple still uses the Macintosh series of computers to this day. In 1995, Apple finally allowed other companies to use their OS and manufacture clones. Some clone manufacturers include Power Computing, Umax, Radius, and Motorola. Unlike IBM, Apple still sells more computers than its clones, but Power Computing is steadily gaining in sales. Macintoshes and Mac clones use System 6, System 7, System 7.1, System 7.5, and System 7.6, all made by Apple. Macintoshes and their clones use microprocessors manufactured by Motorola, including the 68000, 68881, 68020, 68030, 68040, and the Power PC 601, 603, and 604, made by Motorola and IBM, with speeds up to 225 megahertz, and a 603e, available in January of 1997, operating at 300 megahertz (Hassig 45-68). Computers have many interesting components, including: motherboards, microprocessors, FPUs (Floating Point Units), hard disk drives, floppy disk drives (5.25" and 3.5"), CD ROM drives (Compact Disc Read Only Memory), cartridge drives, ROM chips (Read Only Memory), RAM (Random Access Memory), VRAM (Video Random Access Memory), NuBus or PCI (Peripheral Component Interconnect) expansion cards, monitors, keyboards, mice, speakers, microphones, printers, network systems, and modems. The motherboard is what the microprocessor, FPU, ROM, RAM, VRAM and all the circuitry are attached to. The microprocessor, also called a CPU (Central Processing Unit), and the FPU are what all data goes through; they determine what to do with it. Most CPUs operate from 2.5 megahertz (MHz, millions of cycles per second) to 300 MHz. 
The hard disk holds large amounts of data for a long time. Most hard disks can hold from 1 megabyte (MB) to 10 gigabytes (GB). *NOTE: (1 GB is 1,024 MB, 1 MB is 1,024 kilobytes (K), 1 K is 1,024 bytes, 1 byte is 8 bits, and a bit is an on/off code (binary code uses 0 for off and 1 for on); therefore a 10 GB hard disk can hold 85,899,345,920 bits!). Floppy disks are for putting small amounts of data on and being able to take with you. The old 5.25" disks held a few hundred K of data, while the new 3.5" type holds 800 K or 1.4 MB. CD ROMs are relatively new. They have very fine lines on their surface read by a laser, and can usually hold 650 MB of data (which is unchangeable). CD ROM drives range in disc reading speed from 1X (real time, 150 K/sec) to 15X (2.2 MB/sec). Cartridges store large amounts of data and are removable, like floppy disks. They can store up to 1 GB, and come in all shapes and sizes, each type with a different drive. ROM is unchangeable data soldered onto the motherboard. RAM is memory the computer uses for immediate access, such as open applications. Everything in RAM is lost when the computer is shut down. VRAM is used to display a higher resolution or greater color depth on the monitor. 512 K or 1 MB is the standard amount on most computers, and 8 MB is the most available. The resolution ranges from 400 by 300 pixels to 1,920 by 1,440 pixels. The color depth ranges from 1 bit (black and white) to 36 bit (68,719,476,736 colors). NuBus and PCI expansion cards add special features to computers, such as receiving TV transmissions. Monitors display images given to them by VRAM. They range in size from 9 to 21 inches diagonally. Keyboards input data into the computer. Mice have a track ball that moves around inside, causing a cursor to move across the screen. Speakers amplify the sound output of a computer. Microphones allow sounds to be recorded on a computer. Printers allow computers to put data on paper. 
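The unit arithmetic in the note above can be checked directly, using the binary 1,024-based units the note defines:

```python
# Checking the storage arithmetic from the note above: each unit is 1,024
# of the one below it, and a byte is 8 bits.
K = 1024        # bytes per kilobyte
MB = 1024 * K   # bytes per megabyte
GB = 1024 * MB  # bytes per gigabyte

# Bits on a 10 GB hard disk: 10 * 1,073,741,824 bytes * 8 bits per byte.
print(10 * GB * 8)

# Colors available at a 36-bit color depth: one color per 36-bit pattern.
print(2 ** 36)
```

The same 2-to-the-n rule covers the other depths mentioned: 1 bit gives 2^1 = 2 values (black and white), and 36 bits give 2^36 distinct colors.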
Network systems allow data to be easily transmitted from one computer to another. Modems allow data to be transmitted through telephone wires. They have variable speeds from 300 bits per second (bps) to 57,600 bps (Rizzo 5-21). Today, computers are utilized in just about every field imaginable. A caution for the future of computers is that they could go berserk or, if they had a working artificial intelligence, they could make mankind completely obsolete. Computers have evolved, and will continue to evolve, faster than any technology to date. Therefore, it is impossible to fathom where computers will be in a thousand, or even a hundred, years. One thing, however, is certain: computers are the most important advancement our society has ever seen. BIBLIOGRAPHY Rizzo, John, and K. Daniel Clarke. How Macs Work. New York: Ziff-Davis Press, 1996. Hassig, Lee, Margery A. duMond, Esther Ferrington, et al. The Personal Computer. Richmond: Time Life, 1989. Silver, Gerald A., and Myrna L. Silver. Computers and Information Processing. New York: HarperCollins Publishers, 1993. f:\12000 essays\technology & computers (295)\A Multifacited Interface.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -------------------------------------------------------- Microsoft Windows 95 README for Microsoft Windows August 1995 -------------------------------------------------------- (c) Copyright Microsoft Corporation, 1995 ------------------------ HOW TO USE THIS DOCUMENT ------------------------ To view Readme.txt on screen in Notepad, maximize the Notepad window. To print Readme.txt, open it in Notepad or another word processor, then use the Print command on the File menu. -------- CONTENTS -------- IF YOU HAVEN'T INSTALLED WINDOWS 95 LIST OF WINDOWS 95 README FILES HOW TO READ README FILES UNINSTALLING WINDOWS 95 -------- IF YOU HAVEN'T INSTALLED WINDOWS 95 =================================== Additional setup information is available in Setup.txt. 
You can view Setup.txt using Notepad with Windows 3.1. You can find the file on Windows 95 installation disk 1. If you purchased Windows 95 on a CD-ROM, you can find Setup.txt in the \Win95 directory. LIST OF WINDOWS 95 README FILES =============================== In addition to Readme.txt, Windows 95 provides the following readme files: Config.txt Contains syntax information for commands you use with your Config.sys file. Display.txt Provides information about how to configure and correct problems for available drivers and how to obtain additional display drivers. Exchange.txt Provides information to help you set up and run Microsoft Exchange. Extra.txt Provides information about where to find additional Windows 95 files, such as updates and drivers, in addition to files available only in the CD-ROM version of Windows 95. Faq.txt Answers frequently asked questions about Windows 95. General.txt Provides information about startup problems, the programs that come with Windows 95, disk tools, disks and CDs, drivers, removable media, Microsoft FAX, and pen services. This file also contains last-minute information received too late to include in the other readme files. For example, if you have a question about a printer, it would be helpful to look in General.txt as well as in Printers.txt. Hardware.txt Provides information about known problems and workarounds for hardware. You may also need to refer to Printers.txt or Mouse.txt for specific problems. Internet.txt Provides information to help you connect to the Internet if you haven't done so already. Also provides information about where to download Microsoft's new Web browser, Internet Explorer. Mouse.txt Provides information about known problems and workarounds specifically for mouse and keyboard problems. Msdosdrv.txt Contains syntax information for MS-DOS device drivers. For additional help on MS-DOS commands, see Config.txt. You can also use command-line help at the command prompt by typing /? 
following the command name. Msn.txt Provides information to help you connect to The Microsoft Network. Network.txt Provides information about installing and running network servers. Printers.txt Provides information about known problems and workarounds for printers. Programs.txt Provides information and workarounds for running some specific Windows-based and MS-DOS-based programs with Windows 95. Support.txt Provides information about how to get additional support for Windows 95. Tips.txt Contains an assortment of tips and tricks for using Windows 95, most of which are not documented in online Help or the printed book. HOW TO READ README FILES ======================== When you install Windows 95, all the readme files are copied to the \Windows directory. To open a readme file after you install Windows 95: 1. Click the Start menu. 2. Click Run. 3. Type the name of the readme file. Even if you haven't installed Windows 95 yet, you can still open a readme file. To open a readme file before you install Windows 95: If you purchased Windows 95 on floppy disks: -------------------------------------------- 1. Insert Disk 1 into drive A (or whatever drive you prefer). 2. At the MS-DOS command prompt, type the following: a:extract.exe /a /l c:\windows win95_02.cab filename.txt For example, if you want to open General.txt, you would type: a:extract.exe /a /l c:\windows win95_02.cab general.txt 3. Change to the \Windows directory. 4. At the command prompt, type the following: edit filename.txt If you purchased Windows 95 on a CD-ROM: ---------------------------------------- 1. Insert the CD into your CD-ROM drive (drive x in this example). 2. Change to the \Win95 directory on your CD-ROM drive. 3. At the MS-DOS command prompt, type the following: extract.exe /a /l c:\windows win95_02.cab filename.txt For example, if you want to open General.txt, you would type: extract.exe /a /l c:\windows win95_02.cab general.txt 4. Change to the Windows directory on your C drive. 5. 
At the command prompt, type the following: edit filename.txt UNINSTALLING WINDOWS 95 ======================= During Setup, you have the option of saving your system files so that you can uninstall Windows 95 later. If you want to be able to uninstall Windows 95 later, choose Yes. Setup will save your system files in a hidden, compressed file. If you don't need to be able to uninstall Windows 95 later, choose No. You will not see this Setup option if: - You are upgrading over an earlier version of Windows 95. - You are installing to a new directory. - You are running a version of MS-DOS earlier than 5.0. NOTE: The uninstall files must be saved on a local hard drive. You can't save them to a network drive or a floppy disk. If you have multiple local drives, you will be able to select the one you want to save the uninstall information on. To uninstall Windows 95 and completely restore your computer to its previous versions of MS-DOS and Windows 3.x, carry out the following procedure: 1. Click the Start button, point to Settings, and then click Control Panel. 2. Double-click the Add/Remove Programs icon. 3. On the Install/Uninstall tab, click Windows 95, and then click Remove. Or, if you are having problems starting Windows 95, use your startup disk to start your computer, and then run UNINSTAL from the startup disk. NOTE: The uninstall program needs to shut down Windows 95. If there is a problem with this on your computer, restart your computer and press F8 when you see the message "Starting Windows 95." Then choose Command Prompt Only, and run UNINSTAL from the command prompt. If Windows 95 is running and you want to remove the uninstall files to free up 6 to 9 MB of disk space, carry out the following procedure: 1. Click the Start button, point to Settings, and then click Control Panel. 2. Double-click the Add/Remove Programs icon. 3. On the Install/Uninstall tab, click Old Windows 3.x/MS-DOS System Files, and then click Remove. 
You will no longer be able to uninstall Windows 95. f:\12000 essays\technology & computers (295)\A short history of computers.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Whether you know it or not, you depend on computers for almost everything you do in modern life. From the second you get up in the morning to the second you go to sleep, computers are tied into what you do and use in some way, in both obvious and obscure ways. You wake up in the morning, usually to a digital alarm clock. You start your car, and it uses computers the second you turn the key (General Motors is the largest buyer of computer components in the world). You pick up the phone, and it uses a computer. No matter how hard you try to get away from them, you can't. It is inevitable. Many people think of the computer as a new invention, but in reality it is very old, about 2000 years old.1 The first computer was the abacus. This invention was constructed of wood, two wires, and beads: a wooden rack with the two wires strung across it horizontally and the beads strung on the wires. It was used for ordinary arithmetic. Computers of this type are considered analog computers. Another analog computer was the circular slide rule, invented in 1621 by William Oughtred, an English mathematician. The slide rule was a mechanical device made of two rules, one sliding inside the other, marked with many number scales. It could do such calculations as division, multiplication, roots, and logarithms. Soon after came more advanced computers. In 1642 came Blaise Pascal's computer, the Pascaline, considered to be the first automatic calculator. It consisted of gears and interlocking cogs, and numbers were entered with dials. 
It was originally made for his father, a tax collector.2 Pascal went on to build 50 more Pascalines, but clerks would not use them,3 fearing that they would lose their jobs.4 Soon after there were many similar inventions. There was the Leibniz wheel, invented by Gottfried Leibniz, which got its name from its design: a cylinder with stepped teeth.5 It performed the same functions as the other computers of its time. Computers such as the Leibniz wheel and the Pascaline were not widely used until the invention by Thomas of Colmar (a.k.a. Charles Xavier Thomas).6 His was the first successful mechanical calculator that could do all the normal arithmetic functions. This type of calculator was improved by many other inventors, so that by 1890 it could do a number of other things: the improved machines could accumulate partial results, store information (a memory function), and output information to a printer. These improvements were made mainly for commercial use, and the machines required manual operation. Around 1812 in Cambridge, England, new advances in computing were made by Charles Babbage. His idea was that long calculations could be done as a series of steps repeated over many times.7 Ten years later, in 1822, he had a working model, and in 1823 he began fabrication of his invention, which he called the Difference Engine. In 1833 he stopped working on the Difference Engine because he had another idea: to build an Analytical Engine. This would have been the first fully program-controlled digital computer, intended to do all the general-purpose work of modern computers. It was to use punched cards for storage, run on steam power, and be operated by one person.8 The machine was never finished, for many reasons. 
Some of the reasons were the lack of precision mechanics, and the fact that it could solve problems that did not need to be solved at that time.9 After Babbage's machine, people lost interest in this type of invention.10 Eventually, later inventions would create a demand for the calculating capability that computers like Babbage's would have been capable of providing. In 1890 a new era of business computing evolved, with a development in punched-card use that was a step toward automated computing. Punched cards were first used in 1890 by Herman Hollerith, and because of them human error was reduced dramatically.11 Punched cards could hold 80 characters per card, and the machines could process about 50-220 cards a minute. This was a means of easily accessible memory of unlimited size.12 In 1896 Hollerith founded his Tabulating Machine Company; later, in 1924, after several mergers and takeovers, International Business Machines (IBM) was formed. An invention during this period, in 1906, would influence the way computers were built in the future: the first vacuum tube. Later, a paper written by Alan Turing described a hypothetical digital computer.13 In 1939 came the first true digital computer. It was called the ABC and was designed by Dr. John Atanasoff. In 1942 J. Presper Eckert, John W. Mauchly, and associates decided to build a high-speed computer, which would come to be known as the ENIAC (Electronic Numerical Integrator And Computer). The reason for building it was the demand for high computing capacity at the beginning of World War II. When built, the ENIAC took up 1,800 square feet of floor space.14 It consisted of 18,000 vacuum tubes and consumed 180,000 watts of power.15 The ENIAC was rated 1000 times faster than any previous computer. It was accepted as the first successful high-speed computer, and was used from 1946 to 1955.16 Around the same time a new computer was built that was more popular. 
It was more popular because it not only had the ability to do calculations but could also mimic some of the decision-making power of the human brain. When it was finished in 1950, it became the fastest computer in the world.17 It was built by the National Bureau of Standards on the campus of UCLA and was named the National Bureau of Standards Western Automatic Computer, or SWAC. It could be said that the SWAC set the standard for computers from then up to the present,18 because it had all the same primary units: a storage device, an internal clock, input and output devices, and an arithmetic-logic section consisting of a control unit and an arithmetic unit. These computers are considered first-generation computers (1942-1958). In 1948 John Bardeen, Walter Brattain, and William Shockley of Bell Labs filed for the first patent on the transistor.19 This invention would lay the foundation for second-generation computers (1958-1964). Computers of the second generation were smaller (about the size of a piano) and much quicker because of the new inventions of the time: they used the much smaller transistor in place of the bulky vacuum tubes. Another invention that influenced second-generation computers, and every generation after, was magnetic core memory. Now magnetic tapes and disks were used to store programs instead of the programs being wired into the computer. This way the computer could be used for many operations without being totally reprogrammed or rewired for each task: all you had to do was load another disk. In the third generation (1964-1970), computers were more commercialized than ever before. This was because they were getting smaller and more dependable,20 and because cost and power requirements went down,21 largely thanks to the invention of the silicon semiconductor. These computers were used mainly in medical facilities and libraries for keeping track of records, among various other uses. 
These third-generation computers included the first microcomputers. The generation of computers we are in now is the fourth generation, which started in 1970. The fourth generation really started with an idea by Ted Hoff, an employee of Intel, that all the processing units of a computer could be placed on one single chip. This idea of his was not bought by many people.22 I believe that without this idea, upgradeable computers would never have been designed. Today, everything has a microprocessor built into it.23 The microcomputer was changed forever in 1976 when Steve Jobs and Steve Wozniak sold a Volkswagen and a calculator for $1300 to build the first Apple.24 They did the work in their garage. They founded their company, and by 1983 it had successfully made the Fortune 500 list.25 Five years after Apple was founded, IBM announced the release of the IBM PC, and over the next 18 months the IBM would become an industry standard.26 From 1980 on there was a large demand for microcomputers such as the IBM PC and the Apple, not only in industry but in the homes of many people. Many other computers appeared during the '80s, among them the Commodore, Tandy, and Atari, and game systems such as the Nintendo. There was also a large demand for computer games for the home PC. Because of these many demands, companies became very competitive, pushing for faster and better computers. By the late '80s, because of this demand, microprocessors could handle 32 bits of data at a time, pushing over 4 million instructions processed per second.27 It seems as if over time computers have evolved into totally different machines, but if you put it in perspective they are also much alike. On the other hand, with almost every business and many families today demanding better and newer computers, it seems that if you buy a new computer today, industry has made it obsolete before you get it home. 
This is probably because the better you make a computer and the quicker it can do calculations, the quicker it can help you design a new computer that is even faster. It is a domino effect that started 2000 years ago and will probably never end. Who knows what's in store for the future, or, you could say, the fifth generation of computers. 1. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 1 2. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 3. Hale, Andy. History of Computers. http://www2.ncsu.edu/eos/service/bae/www/courses/bae221/jeff/comphist.htm, IBM Compatible. 1995-96. Internet. Andy_Hale@ncsu.edu. pg. 1 4. Hale, Andy. History of Computers. http://www2.ncsu.edu/eos/service/bae/www/courses/bae221/jeff/comphist.htm, IBM Compatible. 1995-96. Internet. Andy_Hale@ncsu.edu. pg. 1 5. Hale, Andy. History of Computers. http://www2.ncsu.edu/eos/service/bae/www/courses/bae221/jeff/comphist.htm, IBM Compatible. 1995-96. Internet. Andy_Hale@ncsu.edu. pg. 1 6. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 1 7. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 2 8. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 3 9. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 3 10. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 3 11. Meyers, Jeremy. 
A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 3 12. Hale, Andy. History of Computers. http://www2.ncsu.edu/eos/service/bae/www/courses/bae221/jeff/comphist.htm, IBM Compatible. 1995-96. Internet. Andy_Hale@ncsu.edu. pg. 2 13. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 4 14. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 4 15. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 4 16. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 5 17. Rutland, David. Why Computers Are Computers. New York: Waren Publishers, 1996. p. 2 18. Rutland, David. Why Computers Are Computers. New York: Waren Publishers, 1996. p. 2 19. Polsson, Ken. Chronology of Events in the History of Micro Computer. http://www.islandnet.com/kpolsson/comphist.htm, IBM Compatible, Internet. 1995-96. Ken.polsson@bbc.org. pg. 3 20. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 6 21. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 6 22. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 6 23. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 6 24. Meyers, Jeremy. A Short History of the Computer. 
http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 6 25. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 6 26. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html, IBM Compatible, Internet, sotflord@lightning.net pg. 6 27. Hale, Andy. History of Computers. http://www2.ncsu.edu/eos/service/bae/www/courses/bae221/jeff/comphist.htm, IBM Compatible. 1995-96. Internet. Andy_Hale@ncsu.edu. pg. 8 f:\12000 essays\technology & computers (295)\AD and DA Convertors and Display Devices.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ National Diploma in Engineering Data Communications Electronics B NIII Assignment No. 2 A/D and D/A Convertors and Display Devices Weighting 20% Name: Malcolm Brown Class: NDD2 Tutor: Ken Hughs Contents: Task; A/D and D/A Convertors; Analogue and Digital Signals; Analogue/Digital Conversions; Analogue to Digital Convertors; Digital to Analogue Converter; Glossary of Terms; Visual Display Devices; Seven-Segment Displays; Dot Matrix Displays; Bibliography. Task A/D and D/A Convertors: Explain two methods of converting analogue signals to digital signals and compare them. Explain one method of digital to analogue conversion. Choose two A/D convertor devices from the catalogue and list their characteristics, performance, cost, applications etc. Display Devices: Describe how LED and LCD display devices operate, i.e. explain the principle behind their operation. Describe the features of the 7-segment, star-burst and dot matrix displays. Choose some devices from the catalogues and describe them. You are required to produce a written report on your work. 
The report should be in standard report format and comprise a front page with title, a contents page, a summary, an introduction, the main body of the report describing the task and how you met its requirements, circuit diagrams etc., and conclusions. Appendices may be placed in the report if necessary. The report should be word processed and presented in a plastic folder. Your name, class and subject should be clearly visible. A/D and D/A Convertors Analogue and Digital Signals Analogue Signals - Signals whose amplitude and/or frequency vary continuously, e.g. sound. Fig 1.1 illustrates an analogue signal. [Fig 1.1 Illustration of an analogue signal] Digital Signals - Signals which are not continuous in nature but consist of discrete pulses of voltage or current, known as bits, which represent the information to be processed. Digital voltages can vary only in discrete steps; normally only two levels are used (0 and 1). Fig 1.2 illustrates a digital signal. [Fig 1.2 Illustration of a digital signal] Analogue / Digital Conversions In today's electronic systems the overall system is often neither entirely analogue nor entirely digital in nature. A digital system may be controlled by input signals that are the amplified analogue outputs of some measuring transducer (thermistor, LDR); similarly, a digital system's output may be required to control an analogue system via analogue control values. Interfacing is therefore required between the analogue and digital subsystems, and it is necessary to be able to convert an analogue signal into a digital equivalent signal and vice versa. A/D and D/A convertors are therefore used. An analogue signal cannot be represented exactly by a digital signal, and must be sampled at sufficiently short intervals for all relevant information to be retained. Sampling theory states that at least two samples must be obtained per period of the highest frequency component. 
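This sampling condition can be checked with a short helper function (a sketch; the 4 kHz figure below is a hypothetical example, not from the report):

```python
def max_sampling_period(f_highest_hz):
    """Longest sampling period T (in seconds) satisfying the condition
    above: at least two samples per period of the highest frequency
    component, i.e. T < 1/(2*f_highest)."""
    return 1.0 / (2.0 * f_highest_hz)

# Hypothetical example: a signal band-limited to 4 kHz must be sampled
# at intervals shorter than 125 microseconds, i.e. above 8000 samples/s.
T = max_sampling_period(4000.0)  # 0.000125 s
fs_min = 1.0 / T                 # 8000.0 samples per second
```

In practice the sampling period is chosen comfortably below this bound, since the inequality is strict.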
If the highest frequency component is fs, then the period of the sampling signal must satisfy T < 1/(2fs). [Fig 2 Sample and hold circuit] Fig 2 shows a basic sample and hold circuit. The capacitor C is used as a store, or memory, to hold the value of the sample, and is connected to the analogue signal input via the resistor R. The time constant CR is chosen to be sufficiently short that the capacitor voltage can follow the required analogue signal variations. At the instant the sample is to be taken, switch S is changed to the hold position, and the sampled voltage is available to the succeeding analogue to digital convertor. The main disadvantage of this simple circuit lies in the voltage drift which occurs on the capacitor during the hold period. This is mainly due to the load placed upon the capacitor by the following circuitry, and can be minimized by using a larger capacitor or a high-impedance buffer amplifier. Analogue to Digital Convertors The two A/D convertors described below are known as the Ramp and Successive Approximation types. Ramp A/D Convertor [Fig 3.1 Block diagram of a ramp A/D convertor: a comparator (output 0 if Va > Vc, 1 if Va < Vc) drives control logic (count up if input = 0, count down if input = 1), which steps an n-bit counter from the clock; the counter feeds an n-bit D/A convertor (reference VRef) whose output Vc closes the loop, and the counter also provides the n-bit parallel digital output.] Fig 3.1 shows the block diagram for a staircase-ramp analogue to digital convertor. It consists of a clock pulse generator which sends clock pulses into the n-bit counter. The counter produces a parallel digital output, which is converted into its analogue equivalent by the D/A convertor. The output of the D/A convertor is compared with the analogue input sample by the comparator. The output of the comparator is then fed into the control logic, which in turn controls the counter. The circuit operates as follows: the counter is emptied by resetting all bits to zero before a conversion is started. 
When the new analogue sample is present, the control logic starts the count, i.e. clock pulses are fed into the counter. The counter's digital output thus increases bit by bit at the clock frequency, and the output from the digital to analogue convertor is a linear ramp made up of equal incremental steps. The count continues until the generated staircase ramp exceeds the value of the analogue sample voltage, at which point the comparator output goes to logic 1 and stops the count. The counter output is at this time the digital equivalent of the analogue voltage. Successive Approximation A/D Convertor [Fig 3.2 Block diagram for a successive approximation A/D convertor: a shift register holds the n-bit digital output and drives a D/A convertor, whose output is compared with the analogue input sample; the comparator output feeds back into the shift register.] Fig 3.2 shows the block diagram for a successive approximation A/D convertor. It consists of a shift register, which stores the digital output, connected to a D/A convertor whose output is compared with the analogue input sample by use of a comparator. The output of the comparator is then fed back into the shift register. The circuit operates by repeatedly comparing the analogue signal voltage with a series of approximation voltages generated by the D/A convertor. Initially the shift register is cleared, so the D/A convertor output is zero. The first clock pulse applies the MSB of the register to the D/A convertor, whose output is then one-half of its full-scale voltage range (FSR). If the analogue voltage is greater than FSR/2 the MSB is retained (stored by a latch); if it is less than FSR/2 the MSB is lost. The next clock pulse applies the next lower bit to the D/A convertor, adding FSR/4 to its output. If the MSB was retained, the total D/A convertor output voltage is now 3FSR/4; if the MSB was lost, the output of the D/A convertor is now FSR/4. In either case the analogue and D/A convertor voltages are again compared. 
If the analogue voltage is the larger of the two, the second MSB is retained (latched); if not, it is lost. A succession of similar trials is carried out, and after each one the shift register output bit is either retained by a latch or is not. Once n+1 clock pulses have been supplied to the register the conversion is complete, and the register output gives the digital word that represents the analogue input sample voltage. The characteristics of two A/D convertors are shown in Appendices 1 and 2. Digital To Analogue Converter A typical 4-bit D/A converter is shown in fig 4.1. The circuit uses precision resistors that are weighted in binary progression, i.e. 1, 2, 4, 8. Vref is an accurate reference voltage. The circuit has 4 inputs (d0, d1, d2, d3) and one output, Vout. When a bit is high it produces enough base current to saturate its transistor, which then acts as a closed switch; when a bit is low the transistor is cut off (an open switch). By saturating and cutting off the transistors (opening and closing the switches), 16 different output currents from 0 to 1.875 Vref/R can be produced. If, for example, Vref = 5 V and R = 5 kΩ, then the total output current varies from 0 to 1.875 mA, as shown in Table 1. [Fig 4.1 D/A converter using switching transistors]

D3 D2 D1 D0   Output current (mA)   Fraction of maximum
 0  0  0  0   0                     0
 0  0  0  1   0.125                 1/15
 0  0  1  0   0.25                  2/15
 0  0  1  1   0.375                 3/15
 0  1  0  0   0.5                   4/15
 0  1  0  1   0.625                 5/15
 0  1  1  0   0.75                  6/15
 0  1  1  1   0.875                 7/15
 1  0  0  0   1                     8/15
 1  0  0  1   1.125                 9/15
 1  0  1  0   1.25                  10/15
 1  0  1  1   1.375                 11/15
 1  1  0  0   1.5                   12/15
 1  1  0  1   1.625                 13/15
 1  1  1  0   1.75                  14/15
 1  1  1  1   1.875                 15/15

Table 1 Output current By sending out a nibble to D3-D0 in ascending order, i.e. 0000, 0001, 0010 etc., the output current of the D/A converter appears as shown in fig 4.2: the output moves one step higher until reaching the maximum current, then the cycle repeats. If all resistors are exact and all transistors matched, all steps are identical in size. 
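The two conversion methods described above can be sketched in software against an ideal n-bit D/A stage (a simplified model, not the hardware of the figures; function names are illustrative):

```python
def dac(code, n_bits, fsr):
    """Ideal n-bit D/A stage: output = code/2**n_bits of the full-scale
    range fsr, giving equal steps of fsr/2**n_bits (cf. Table 1, where
    Vref/R sets the current scale)."""
    return fsr * code / (2 ** n_bits)

def ramp_adc(va, n_bits, fsr):
    """Staircase-ramp conversion: count up from zero until the DAC
    output first exceeds the analogue sample va, then stop."""
    for code in range(2 ** n_bits):
        if dac(code, n_bits, fsr) > va:
            return code
    return 2 ** n_bits - 1  # input at or above full scale

def sar_adc(va, n_bits, fsr):
    """Successive approximation: trial each bit from the MSB down,
    latching it only if the DAC output does not overshoot va."""
    code = 0
    for bit in reversed(range(n_bits)):
        trial = code | (1 << bit)
        if dac(trial, n_bits, fsr) <= va:
            code = trial  # bit retained (latched)
    return code

# With n = 4 and a 5 V full-scale range the step is 0.3125 V, so a
# 2.0 V sample truncates to code 6; the staircase stops one count
# higher, on the first code whose DAC output exceeds the sample.
print(sar_adc(2.0, 4, 5.0))   # 6
print(ramp_adc(2.0, 4, 5.0))  # 7
```

The sketch also makes the speed comparison concrete: the staircase loop can take up to 2^n counts per conversion, while the successive approximation loop always finishes in n trials.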
[Fig 4.2 Output current of the D/A convertor] Glossary Of Terms Resolution - One way to measure the quality of a D/A converter is by its resolution, the ratio of the LSB increment to the maximum output. Resolution can be calculated by the formula: resolution = 1 / (2^n - 1), where n = number of bits, and percentage resolution = resolution × 100%. The greater the number of bits, the better the resolution. Table 2 summarizes the resolution for converters with 4 to 18 bits.

Bits   Resolution           Percent
 4     1 part in 15         6.67
 6     1 part in 63         1.59
 8     1 part in 255        0.392
10     1 part in 1,023      0.0978
12     1 part in 4,095      0.0244
14     1 part in 16,383     0.0061
16     1 part in 65,535     0.00153
18     1 part in 262,143    0.000381

Table 2 Resolution table Accuracy - The conformance of a measured value with its true value; the maximum error of a device such as a data converter from the true value. Absolute Accuracy - The worst-case input-to-output error of a data converter, referred to the NBS (National Bureau of Standards) standard volt. Relative Accuracy - The worst-case input-to-output error of a data converter as a percent of full scale, referred to the converter reference. The error consists of offset, gain and linearity components. Conversion Rate - The number of repetitive A/D or D/A conversions per second for a full-scale change to specified resolution and linearity. Visual Display Devices Visual displays are often employed in electronic equipment to indicate the numerical value of some quantity, e.g. in digital watches, electronic calculators and digital voltmeters. A variety of display devices are available, but the most common are the Light Emitting Diode (LED) and the Liquid Crystal Display (LCD). Light Emitting Diode (LED) - The majority of light emitting diodes are either gallium phosphide (GaP) or gallium-arsenide-phosphide (GaAsP) devices. 
An LED radiates energy in the visible part of the electromagnetic spectrum when the forward bias voltage applied across the diode exceeds the voltage that turns it on. This voltage depends upon the type of LED and the light it emits. Table 3 gives information on different LED types, and fig 5.1 shows the electronic symbol for an LED.

Colour   Material   Wavelength (peak radiation, nm)   Forward voltage at 10 mA (V)
Red      GaAsP      650                               1.6
Green    GaP        565                               2.1
Yellow   GaAsP      590                               2.0
Orange   GaAsP      625                               1.8
Blue     SiC        480                               3.0

Table 3 LED types Blue LEDs are a fairly recent development; these devices use silicon carbide (SiC). [Fig 5.1 LED symbol] The current flowing in an LED must not be allowed to exceed a safe figure, generally 20-60 mA, and if necessary a resistor of suitable value must be connected in series with the diode to limit the current. Often an LED is connected between one of the outputs of a TTL device and either earth or +5 V, depending upon when the LED is required to glow visibly. If, for example, an LED is expected to glow when the output to which it is connected is low, the device should be connected as in fig 5.2. Suppose the low voltage is 0.4 V and the sink current 16 mA. Then, if the LED voltage drop is 1.6 V, the value of the series resistor will be (5 - 1.6 - 0.4) / (16 × 10⁻³) = 188 Ω. When the output of the device is high (about 4 V), no current flows and the LED remains dark. When the LED is to glow to indicate the high output condition, the circuit shown in fig 5.3 must be used, with R1 = (5 - 1.6) / (16 × 10⁻³) = 213 Ω. When an LED is reverse biased it acts very much like a Zener diode with a low breakdown voltage (about 4 V). Light emitting diodes are commonly used because they are cheap, reliable, easy to interface and readily available from a number of sources. Their main disadvantage is that their luminous efficiency is low, typically 1.5 lumens/watt. 
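The series-resistor arithmetic above is plain Ohm's law, and can be checked with a short helper (the function name is illustrative; the values are the ones from the worked examples):

```python
def led_series_resistor(supply_v, led_drop_v, output_drop_v, current_a):
    """Ohm's-law sizing of the LED current-limiting resistor:
    R = (supply - LED drop - output-stage drop) / LED current."""
    return (supply_v - led_drop_v - output_drop_v) / current_a

# Active-low case (fig 5.2): 5 V supply, 1.6 V LED drop, 0.4 V output
# low level, 16 mA sink current.
r_low = led_series_resistor(5.0, 1.6, 0.4, 16e-3)   # 187.5 -> use ~188 ohms
# Active-high case (fig 5.3): no output-low drop in the path.
r_high = led_series_resistor(5.0, 1.6, 0.0, 16e-3)  # 212.5 -> use ~213 ohms
```

Rounding up to the next preferred resistor value keeps the current at or just below the 16 mA design figure.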
[Fig 5.2] [Fig 5.3] The characteristics of an LED display are given in Appendix 3. Liquid Crystal Displays (LCD) - A solid crystal is a material in which the molecules are arranged in a rigid lattice structure. If the temperature of the material is increased above its melting point, the liquid that is formed will tend to retain much of the orderly molecular structure; the material is then said to be in its liquid crystalline phase. There are two classes of liquid crystal, known respectively as nematic and smectic, but only the former is used for display devices. A nematic liquid crystal does not radiate light; instead it interferes with the passage of light whenever it is under the influence of an applied electric field. There are two ways in which the optical properties of a crystal can be influenced by an electric field: dynamic scattering and twisted nematic. The former was commonly employed in the past, but its application is now mainly restricted to large-sized displays. The commonly met liquid crystal displays, e.g. those in digital watches and hand calculators, are all of the twisted nematic type. [Fig 6 (A) A liquid crystal cell; (B) and (C) operation of a liquid crystal cell: with no applied voltage the incident light is transmitted and reflected back (B); with a voltage V applied there is no transmitted light (C).] The construction of a liquid crystal cell is shown in fig 6 (A). A layer of liquid crystal is placed between two glass plates that have transparent metal film electrodes deposited on their interior faces. A reflective surface, or mirror, is situated on the outer side of the lower glass plate (it may be deposited on its surface). The conductive material is generally either tin oxide or a tin oxide/indium oxide mixture, and it will transmit light with about 90% efficiency. 
The light incident upon the upper glass plate is polarized in such a way that, if there is zero electric field between the plates, the light is able to pass right through and arrive at the reflective surface. Here it is reflected back, and the reflected light travels through the cell and emerges from the upper plate (fig.6 (B)). If a voltage is applied across the plates (fig.6 (C)), the polarization of the light entering the cell is altered and it is no longer able to propagate as far as the reflective surface. Therefore no light returns from the upper surface of the cell and the display appears dark. Because the LCD does not emit light, it dissipates little power. Liquid crystal displays, unlike LEDs, are not available as single units and are generally manufactured in the form of a 7-segment display. The metal oxide film electrode on the surface of the upper glass plate is formed into the shape of the required 7 segments, each of which is taken to a separate contact, and the lower glass plate has a common electrode, or backplate, deposited on it. The idea is shown by fig 7. With this arrangement a voltage can be applied between the backplate and any one, or more, of the seven segments to make the particular segment(s) appear dark and thereby display the required number. Nematic liquid crystal displays possess a number of advantages which have led to their widespread use in battery operated equipment. First, their power consumption is very small, about 1 mW per segment (much less than that of an LED); secondly, their visibility is not affected by bright incident light (such as sunlight); and thirdly, they are compatible with low-power NMOS/CMOS circuitry.

Fig 7 LCD 7-segment Display

The characteristics of an LCD display are given in Appendix 4.

Seven Segment Displays

Seven-segment displays are generally used as numerical indicators and consist of a number of LEDs arranged in seven segments, as shown in Fig 8 (A).
Any number between 0 and 9 can be indicated by lighting the appropriate segments, as shown in Fig 8 (B). A typical 7-segment display is manufactured in a 14-pin DIL package, with the cathode of each LED brought out to a separate terminal and the anodes connected to a common pin.

Fig.8 (A)   Fig 8 (B)

Clearly, the 7-segment display needs a 7-bit input signal, and so a decoder is required to convert the digital signal to be displayed into the corresponding 7-segment signal. Decoder/driver circuits can be made using SSI devices, but more usually a ROM or a custom-built IC would be used. Fig 10 (A) shows one arrangement, in which the BCD output of a decade counter is converted to a 7-segment signal by a decoder. When a count in excess of 9 is required, a second counter must be used, connected in the manner shown by fig 10 (B). The tens counter is connected to the output of the final flip-flop of the units counter in the same way as the flip-flops inside the counters are connected.

Fig 10 (A) Decade counter, BCD-to-7-segment decoder and 7-segment display   Fig 10 (B) Two-decade version with a second decade counter, decoder and display

Dot Matrix Displays

A dot matrix display allows each alphanumeric character to be indicated by illuminating a number of dots in a 5 × 7 dot matrix. To allow for lower-case letters and for spaces between adjacent rows and columns, each character fount is allocated a 6 × 12 space. Fig.11.1 shows a 6 × 12 dot matrix. Every location in the dot matrix has an LED connected, as shown by Fig 11.2 for the top two rows of the matrix only. All the cathodes of the LEDs in one row, and all the anodes in one column, are connected together. By addressing the appropriate locations in the matrix and making the LEDs at those points glow visibly, any number or character in the set can be illuminated. Some examples are given in Fig.??? The circuitry required to drive a dot matrix display is too complex to be implemented using SSI devices.
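The BCD-to-7-segment decoding described above is essentially a ten-entry lookup table. A minimal sketch in Python, assuming (purely for illustration) that bits 6..0 of the output word drive segments a..g; a real decoder IC fixes its own pin assignment:

```python
# Segment pattern for each BCD digit; bit 6 down to bit 0 correspond to
# segments a..g of the display (an assumed ordering for this sketch).
SEGMENTS = {
    0: 0b1111110, 1: 0b0110000, 2: 0b1101101, 3: 0b1111001,
    4: 0b0110011, 5: 0b1011011, 6: 0b1011111, 7: 0b1110000,
    8: 0b1111111, 9: 0b1111011,
}

def decode_bcd(digit):
    """Return the 7-bit segment word that lights up `digit` (0-9)."""
    return SEGMENTS[digit]

def decode_count(count):
    """Two-decade arrangement (cf. fig 10 (B)): decode tens and units."""
    return decode_bcd(count // 10 % 10), decode_bcd(count % 10)
```

A ROM-based decoder is exactly this table burned into hardware: the 4-bit BCD value addresses the ROM and the stored 7-bit word drives the segments.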
One 3-chip LSI dot matrix display controller, the Rockwell 10939, 10942 and 10943, is a general-purpose controller which is able to interface with other kinds of dot matrix as well as the LED type. The controller can drive up to 46 dots and up to 20 characters selected from the full 96-character ASCII set.

Fig 11.1   Fig 11.2

Bibliography
Microelectronic Systems: A Practical Approach - W. Ditch
Basic Electrical and Electronic Engineering - E.C. Bell and R.W. Bolton
Electrical and Electronic Principles for Technicians - D.C. Green
Data Conversion Components - Datel
R S Data Library - R S Components

f:\12000 essays\technology & computers (295)\An Ergonomic Evaluation of Kinesis Ergonomic Computer Keybo.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

1. Introduction

In this information-technology age, everyday tasks are more and more related to computers, ranging from basic jobs such as providing food recipes for housewives to complicated ones such as analyzing laboratory experimental data for scientists and engineers. This popularity of computers means that the time one has to spend with a computer is far greater than in the past. Until now, the computers and computer peripherals on the market have been made according to the same designs as the ones invented decades ago, when computers were used only in large-scale scientific projects or big corporations. That means that for most people the ergonomic value of these products was not taken into account when designing them. Fortunately, more companies are now trying to change the way people work with computers by marketing a number of ergonomic products, most notably keyboards, mice and monitors, with new models being released all the time. The focus is on these products because they are the parts of the computer one interfaces with the most while working.
The question of whether these ergonomic keyboards, mice, monitors and other products really work attracts many regular computer users, and studies dedicated to it have been carried out. This report is based on one such study of an ergonomic keyboard from a manufacturer called Kinesis. The study looks not only at the effect of the keyboard on the users' bodies, by means of electromyographic activity, but also at the learning rate of users changing to this new style of keyboard. This is very useful, since a slow learning rate would lead to a decrease in the effectiveness of work. Introduced in 1868 by Christopher Sholes, the keyboard is still the primary data-entry mode for most computer users. With the increase in computer, and hence keyboard, usage, problems known as operator stress problems have developed among keyboard users. These are a kind of cumulative trauma disorder, mainly caused by working excessively or repetitively with the same device, the keyboard in this case, in the same position for a long period of time. This kind of disorder is considered to be the most expensive and severe one occurring in the office environment. This has led to a number of alternative designs being introduced in the market with the main intention of reducing the muscular stress required for typing. The reason these designs have not yet replaced the old one is users' familiarity with the old design: a certain amount of retraining time is required to familiarize users with a new keyboard design, and thus the design requiring less time is likely to be the choice. This study's main objectives are to measure and analyze the initial learning rate and the electromyographic activity (explained later) while using an alternative keyboard design, the Kinesis Ergonomic Computer Keyboard (Figure 1). These data are then compared with those for the standard computer keyboard, the old design, to see if the new product is worth the time and money spent on it.
The electromyographic (EMG) signals used to examine muscle activity in this study are electrical signals generated by the muscles themselves. Such signals can sometimes be used to control artificial limbs, especially ones requiring a sensitive or complicated degree of control such as rotary or grasping motion; systems that use such signals are called myoelectric systems. The Kinesis keyboard utilizes the same QWERTY layout as the standard design, so that users do not have to relearn typing all over again. The key ergonomic features of this keyboard are:
· The distance between the centers of the halves of the Kinesis keyboard is approximately 27 cm, reducing the angle of adduction of the wrists to near zero for most adults.
· The keypads slope downward from the inside to the outside edge, and are concave to better fit the natural shape of the operator's hands. The keys form straight columns and slightly curved rows.
· The keyboard features a built-in forearm-wrist support extending approximately 14 cm from the home row to the edge.
· The keyboard features separate thumb-operated keypads to redistribute the workload from the little fingers to the thumbs. These keypads consist of the enter, space, backspace, delete and combination (ctrl and alt) keys.
· Detachable numeric/cursor pad.
· Integral palm supports.
· Shorter reach for the function keys.
Figure 1. The Kinesis Ergonomic Computer Keyboard.

2. Details

2.1 Materials and methods

Six female professional typists, aged 29 to 52 and with 10 to 32 years of typing experience, participated in this experiment. Typing speed in words per minute, typing accuracy in percentage of characters typed correctly
Rubin made the announcement about the new club she was starting at the junior high school, it triggered something in my mind. Two weeks later, during the last month of my eighth grade year, I figured it out. I was rummaging through the basement, and I ran across the little blue box that my dad had brought home from work a year earlier. Could this be a modem? I asked Mrs. Rubin about it the next day at school, and when she verified my expectations, I became the first member of Teleport 2000, the only organization in the city dedicated to introducing students to the information highway. This was when 2400-baud was considered state-of-the-art, and telecommunications was still distant from everyday life. But as I incessantly logged onto Cleveland Freenet that summer, sending e-mail and posting usenet news messages until my fingers bled, I began to notice the little things. Electronic mail addresses started popping up on business cards. Those otherwise-incomprehensible computer magazines that my dad brought home from work ran monthly stories on communications-program this, and Internet-system that. Cleveland Freenet's Freeport software began appearing on systems all over the world, in places as far away as Finland and Germany - with free telnet access! I didn't live life as a normal twelve-year-old kid that summer. I sat in front of the monitor twenty-four hours a day, eating my meals from a plate set next to the keyboard, stopping only to sleep. When I went back to school in the fall, I was elected the first president of Teleport 2000, partially because I was the only student in the school with a freenet account, but mostly because my enthusiasm for this new, exciting world was contagious. Today, as the business world is becoming more aware of the advantages of telecommunications, and the younger generation is becoming more aware of the opportunities, it is successfully being integrated into all aspects of our society.
Companies are organizing Local Area Networks and tapping into information resources through internal networking and file sharing, and children of all ages are entertained by the GUI-based commercial systems and amazed by the worldwide system of gopher and search services. As a result, a million more people join the 'net every month, according to a 1994 article by Vic Sussman in U.S. News & World Report. They say that the worldwide community used to double its knowledge every century. Right now, that rate has been reduced to seven years, and is constantly decreasing. I've learned more since I started traveling the information highway than I could have possibly imagined. Through File Transfer Protocol sites, I can download anything from virus-detection utilities to song lyrics and guitar tabs. I receive press releases, proclamations and international news from the White House via a mailing list. I even e-mailed President Clinton recently and received a response the next day. And it was just a few months ago that I hung up my 2400-baud modem for a replacement six times as fast. The essence of this international system of systems was neatly summed up by David S. Jackson and Suneel Ratan in a recent Time article: "The magic of the Net is that it thrusts people together in a strange new world, one in which they get to rub virtual shoulders with characters they might otherwise never meet." To me, this electronic "Cyberspace" was like kindergarten all over again. It was not only an introduction to a whole new world of exciting opportunities, but it helped me take a step further into maturity. Communicating with others on this alternate plane of reality was so different, yet so similar, to the world I had already experienced. The Internet is a place where the only way you can view people is by how they choose to display themselves. Because you can't see other users, you can't make any prejudgments based upon race, sex, or physical handicap. As stated by John R. 
Levine and Carol Baroudi in The Internet for Dummies, "Who you are on the Internet depends solely on how you present yourself through your keyboard." The reason for this is simple. The people who created this form of communication weren't interested in that. They didn't care about political or ethnic boundaries; they only cared about the abstract. As a result, the parallel world they conceived contained a true form of equality. "One computer is no better than any other, and no person is better than any other," wrote Levine and Baroudi, and the only way this right can be taken away from you is if you choose to remove it yourself. My realization of this concept taught me a lot about the faults of the real world, and why so many people feel the need to defect to Cyberspace so frequently. I believe in the future - not the extreme 1984 or 2001: A Space Odyssey future, but the inevitable progression from today into tomorrow. The people of tomorrow will not be puzzled by the word "Internet" or the mechanics behind networking - these will be basic survival skills in society. The future will see an electronically-linked global community, in which everyone is a citizen. The constant thickening of the worldwide web of networks excites me, because it proves that the world is not as big as one may think. You really can reach out to anyone you want in a matter of milliseconds. The other day, I was helping a ten-year-old girl find an e-mail "key-pal" from Australia. I think I see a lot of me, the curious eighth-grader, in her. Perhaps I see a lot of the future, too.

f:\12000 essays\technology & computers (295)\Anatomia.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

GENERAL NOTIONS OF ANATOMY

1. Bones

Bone, or osseous tissue, is a rigid form of connective tissue that forms most of the skeleton. The skeletal system, or skeleton (G. dry), of the adult consists of more than 200 bones, which constitute the supporting framework of the body.
Some cartilages are also included in the skeletal system (e.g., the costal cartilages that join the anterior ends of the ribs to the sternum). The connections between the components of the skeleton are called joints; most of them permit movement. The skeletal system consists of two main parts: (1) the axial skeleton, composed of the skull, vertebral column, sternum and ribs, and (2) the appendicular skeleton, formed by the pectoral and pelvic girdles and the bones of the limbs. The study of bones is called osteology. Although the bones studied in the laboratory are lifeless and dry, owing to the removal of their proteins, bones in the body are living organs that change considerably as one ages. Like other organs, bones possess blood vessels, lymphatic vessels and nerves, and they can be affected by disease. Osteomyelitis is an inflammation of the bone marrow and adjacent bone. When broken or fractured, bone heals. Bones that are not used, e.g. in a paralyzed limb, undergo atrophy (that is, they become thinner and weaker). Bone may be resorbed, as occurs after the loss or extraction of teeth. Bones also undergo hypertrophy (that is, they become thicker and stronger) when they have a greater weight to support. The bones of different people exhibit anatomical variations. They vary according to age, sex, physical characteristics, health, diet, race and different genetic and endocrine conditions. Anatomical variations are useful in the identification of skeletal remains, an aspect of forensic medicine (the relation and application of medical facts to legal problems). Living bones are mouldable tissues that contain organic and inorganic components. They consist essentially of intercellular material impregnated with mineral substances, chiefly hydrated calcium phosphate, i.e., Ca3(PO4)2.
The collagen fibres in the intercellular material give bones elasticity and strength, while the crystals of salts, in the form of tubes and rods, give them hardness and some rigidity. When a bone is decalcified in the laboratory by submersion for a few days in dilute acid, its salts are removed but the organic material remains. The bone retains its shape, but is so flexible that it can be tied in a knot. A calcined bone also retains its shape, but its fibrous tissue is destroyed; as a result, it becomes brittle and inelastic and crumbles easily. The relative amount of organic to inorganic substance in bones varies with age. The organic substance is greatest in childhood; for this reason, children's bones bend somewhat. In some metabolic disorders, such as rickets and osteomalacia, there is inadequate calcification of the bone matrix. Since calcium gives bones their hardness, the uncalcified areas bow somewhat, particularly in weight-bearing bones. This leads to progressive deformities, such as genu valgum (knock-knee). Although the diagnosis of rickets is suggested by clinical widening at the sites of the epiphyseal cartilage plates, it is confirmed by the typical radiographic changes that occur at the growing ends of the long bones and ribs in these patients. Fractures are more common in children than in adults, owing to the combination of their more slender bones and their carefree activities. Fortunately, many of these fractures are hairline or greenstick fractures, which are not serious. In a greenstick fracture, the bone breaks like a willow branch. Fractures of the epiphyseal cartilage plate are serious because they may result in premature fusion of the diaphysis and epiphysis, with subsequent shortening of the bone; e.g., premature fusion of a radial epiphysis leads to progressive radial deviation of the hand as the ulna continues to grow.
The existence of unfused epiphyses in young people can be very useful in their treatment; e.g., placing staples across the epiphyseal cartilage plate at the knee arrests growth of the lower limb. It is the bones of the normal leg that are stapled, to allow the bones of the short leg to catch up with the former. Fortunately, fractures unite more quickly in children than in adults. A femoral fracture occurring at birth is united in three weeks, whereas union takes up to 20 weeks in people aged 20 or more. In old age, both the organic and inorganic components of bone decrease, producing a condition called osteoporosis. There is a reduction in the amount of bone (atrophy of skeletal tissue) and, as a result, the bones of elderly people lose their elasticity and fracture easily. For example, senile persons may stumble on a small bump while walking, feel or hear the neck of their femur (thigh bone) break, and fall to the ground. Fractures of the neck of the femur are especially common in elderly women, because osteoporosis is more severe in them than in elderly men.

TYPES OF BONE

There are two main types of bone, spongy and compact, but there is no sharp boundary between the two, since the differences between them depend on the relative amount of solid substance and on the number and size of the spaces in each. All bones have an outer casing of compact substance around a central mass of spongy substance, except where the latter is replaced by a medullary cavity or an air space, e.g., the paranasal sinuses. The spongy substance consists of fine, irregular trabeculae of compact substance that branch and unite with one another to form intercommunicating spaces, which are filled with bone marrow. The trabeculae of the spongy substance are arranged along lines of pressure and tension.
In adults there are two types of bone marrow, red and yellow. Red bone marrow is active in blood formation (haematopoiesis), whereas yellow bone marrow is largely inert and fatty. In most long bones there is a medullary cavity in the body, or diaphysis, which contains yellow bone marrow in adult life. In yellow bone marrow, most of the haematopoietic tissue has been replaced by fat. The compact substance appears solid except for microscopic spaces. Its crystalline structure gives it hardness and rigidity and makes it opaque to X-rays.

Classification of bones. Bones may be classified regionally as axial (skull, vertebrae, ribs and sternum) or appendicular (bones of the upper and lower limbs and the bones associated with them). Bones are also classified according to their shape. 1. Long bones are tubular in form and possess a body (diaphysis) and two ends, which are concave or convex. The length of long bones is greater than their breadth, although some long bones are small (e.g., in the fingers). The ends of long bones articulate with other bones; accordingly, they are expanded, smooth and covered with hyaline cartilage. Generally, the diaphysis of a long bone is hollow and typically presents three borders separating its three surfaces. 2. Short bones are cuboid in shape and are found only in the foot and the wrist, e.g., the carpal bones. They present six surfaces, of which four or fewer are articular and two or more serve for the attachment of tendons and ligaments and for the entry of blood vessels. 3. Flat bones consist of two plates of compact bone with spongy bone and marrow between them, e.g., the bones of the calvaria (cranial vault), the sternum and the scapula (except for the thin part of that bone). The marrow space between the outer and inner laminae of the flat bones of the skull is known as the diploë (G. double). Most flat bones help to form the walls of cavities (e.g., the cavity of the skull); for this reason most of them are slightly curved rather than flat. Early in life a flat bone consists of a thin layer of compact substance, but marrow (e.g., diploë) appears within it during later childhood, resulting in compact layers on each side of the marrow cavity. 4. Irregular bones have various shapes (e.g., the bones of the face and the vertebrae). The bodies of the vertebrae possess some characteristics of long bones. 5. Pneumatic bones contain air cells or sinuses, e.g., the mastoid air cells in the mastoid part of the temporal bone, and the paranasal sinuses. Outgrowths of the mucous membrane of the middle ear and of the nasal cavity invade the marrow cavity, producing the air cells and sinuses respectively. 6. Sesamoid bones are rounded or oval bony nodules that develop in certain tendons (e.g., the patella in the tendon of the quadriceps femoris, and the pisiform in the tendon of the flexor carpi ulnaris). These bones were named sesamoid because of their resemblance to sesame seeds. They are commonly found where tendons cross the ends of long bones in the limbs. They protect the tendon from excessive wear and change the angle of the tendon as it passes to its insertion, which results in a greater mechanical advantage at the joint. The articular surface of a sesamoid bone is covered with articular cartilage, while the rest is embedded in the tendon. 7. Accessory bones develop when an additional centre of ossification appears and gives rise to a bone, or when one of the usual centres fails to fuse with the main bone. The separated part of the bone gives the impression of a supernumerary bone. Accessory bones are common in the foot, and it is important to know of them so that they are not mistaken for bone chips or fractures in radiographs. 8. Heterotopic bones are those that do not belong to the main skeleton but may develop in certain soft tissues and organs as a result of disease. This type of bone may form in scars, and chronic inflammation, characteristic of tuberculosis, may produce bone tissue in the lung.

BONE MARKINGS

The surface of bones is not smooth and polished, nor even in contour, except in the areas covered by cartilage and where tendons, blood vessels and nerves pass in grooves (e.g., the intertubercular groove at the head of the humerus and the groove for the radial nerve on its shaft). Bones exhibit a variety of projections, depressions and openings. The markings found on dry bones occur in any area where tendons, ligaments and fascia were attached. The attachment of the muscle fibres of a muscle does not produce any marking on a bone. Bone markings begin to become prominent during puberty (12 to 16 years) and become increasingly accentuated in adult life. The markings are given names to help distinguish them.

Elevations. The various types of elevation on bones are listed below in order of prominence. Examine each type on a skeleton. A linear or slightly raised elevation is referred to as a line (e.g., the superior nuchal line of the occipital bone, and the medial supracondylar line). Very prominent lines are called crests (e.g., the iliac crest and the pubic crest). A rounded elevation is called (1) a tubercle (a small projecting eminence); (2) a protuberance (a swelling or lump, e.g., the external occipital protuberance); (3) a trochanter (a large blunt elevation, e.g., the greater trochanter of the femur); (4) a tuberosity or tuber (a large elevation); and (5) a malleolus (an elevation resembling the head of a hammer). A pointed elevation or projecting part is called a spine, e.g., the anterior superior iliac spine, or a process, e.g., the spinous process of a vertebra. Facets (Fr. small faces) are small, smooth, flat areas or surfaces of a bone, especially where it articulates with another bone. Articular facets are covered with hyaline cartilage (e.g., the facets of a vertebra). A rounded articular area of a bone is called a head (e.g., the head of the humerus) or a condyle, e.g., the lateral condyle of the femur. An epicondyle is a prominent process just above a condyle.

Depressions. Small hollows in bones are described as fossae, while long, narrow depressions are referred to as grooves (sulci). An indentation in the margin of a bone is called a notch (incisura), e.g., the acetabular notch.

Foramina and Canals. When a notch is closed off by a ligament or by bone so as to form a perforation or hole, it is called a foramen (e.g., the foramen magnum). A foramen that has length is called a canal (e.g., the facial canal). A canal has an opening at each end. A meatus (a passage) is a canal that enters a structure but does not pass through it, e.g., the external acoustic meatus or auditory canal.

BONE DEVELOPMENT

Bone tissues develop from condensations of mesenchyme (embryonic connective tissue). The mesenchymal model of a bone that forms during the embryonic period may undergo direct ossification, called intramembranous ossification (membranous bone formation), or be replaced by a cartilage model; the latter becomes ossified by intracartilaginous ossification (endochondral bone formation). In short, bone replaces membrane or cartilage. The process of ossification is similar in both cases, and the final histological structure of the bone is identical. Intramembranous ossification occurs rapidly and takes place in bones that are urgently needed for protection (the flat bones of the calvaria, or cranial vault). Intracartilaginous ossification, which occurs in most bones of the skeleton, is a much slower process.

Development of Long Bones.
The first indication of ossification in the cartilaginous model of a long bone is visible near the centre of the future diaphysis and is called the primary centre of ossification. The primary centres appear at different times in the different developing bones, but most centres of ossification appear between the 7th and 12th weeks of prenatal life. Practically all of them are present at birth, by which time ossification from the primary centre has almost reached the ends of the cartilage model of the long bone. The part of the bone formed from a primary centre is called the diaphysis. At birth, additional centres of ossification may appear in the cartilaginous ends of a long bone. These are referred to as epiphyses, or secondary centres of ossification. Most secondary centres of ossification appear after birth. The parts of a bone formed from the secondary centres are called epiphyses. The epiphyses, or secondary centres of ossification, of the bones of the knee are the first to appear and may be present at birth. The cartilaginous epiphyses undergo the same changes that occur in the diaphysis. As a result, the body of the bone becomes capped at each end by bone, the epiphyses, which develop from the secondary centres of ossification. The part of the diaphysis nearest the epiphysis is referred to as the metaphysis. The diaphysis grows in length by proliferation of the cartilage at the metaphysis. To allow growth in length to continue until the adult length of a bone is reached, the bone formed from the primary centre of ossification in the diaphysis does not fuse with that formed from the secondary centres in the epiphyses until the bone has attained its adult size. During the growth of a bone, a plate of cartilage, known as the growth plate or epiphyseal cartilage plate, is interposed between the diaphysis and the epiphysis.
For brevity, it is often called the epiphyseal plate. The diaphysis consists of a hollow tube of compact substance surrounding the medullary cavity, whereas the epiphyses and metaphyses consist of spongy substance covered by a thin layer of compact substance. The compact bone on the articular surfaces of the epiphyses is soon covered by hyaline cartilage called articular cartilage. During the first two postnatal years, secondary centres of ossification appear in the epiphyses that are exposed to pressure (e.g., at the knee and hip). Such centres, generally referred to as pressure epiphyses, are situated at the ends of the long bones, where they are subjected to pressure from the opposing bones at the joint they form. Some secondary centres of ossification ossify parts of a bone associated with the attachment of muscles and strong tendons. These centres are generally called traction epiphyses (e.g., the tubercles of the humerus). Such epiphyses are subjected to traction rather than to pressure. The epiphyseal cartilage plates are eventually replaced by the development of bone on each of their two sides, diaphyseal and epiphyseal. When this occurs, growth of the bone ceases and the diaphysis fuses with the epiphyses by bony union, or synostosis. The bone formed at the site of the epiphyseal cartilage plate is particularly dense and is still recognizable in radiographs of children and adolescents. Knowledge of this detail prevents confusion with fracture lines. In general, the epiphysis of a long bone whose centre of ossification appeared last is the first to fuse with the diaphysis. When an epiphysis forms from more than one centre (e.g., at the proximal end of the humerus), the centres fuse with one another before the epiphysis unites with the diaphysis. The changes in developing bones are clinically important.
Os m‚dicos e dentistas, especialmente os radiologistas, pediatras, ortodentistas e cirurgiäes ortopedistas, devem estar instruĦdos acerca do crescimento ˘sseo. A ‚poca de aparecimento das diversas epĦfises varia com a idade cronol˘gica. Como estĈo disponĦveis boas tabelas de referˆncias, nĈo tem sentido memorizar as datas de aparecimento e desaparecimento dos centros de ossifica‡Ĉo de todos ossos. Um radiologista determina a idade ˘ssea de uma pessoa estudando os centros de ossifica‡Ĉo. Dois crit‚rios sĈo usados: (1) o aparecimento de material calcificado na di fise e/ou epĦfises. A ‚poca de aparecimento ‚ especificada para cada epĦfise e di fise de cada osso para cada um dos sexos; e (2) o desaparecimento da linha escura que representa a placa de cartilagem epifis ria. Isto indica que a epĦfise se fundiu ... di fise e ocorre em ‚pocas determinadas para cada epĦfise. A fusĈo das epĦfises com a di fises ocorre 1 a 2 anos mais cedo no sexo feminino do que no masculino. A determina‡Ĉo da idade ˘ssea ‚ ami£de usada na defini‡Ĉo da idade aproximada de restos de esqueletos humanos em casos m‚dico-legais. Algumas doen‡as aceleram e outras alentecem os tempos de ossifica‡Ĉo em compara‡Ĉo com a idade cronol˘gica do indivĦduo. O esqueleto em crescimento ‚ sensĦvel a doen‡as relativamente leves e transit˘rias e a perĦodos de desnutri‡Ĉo. A prolifera‡Ĉo de cartilagem na met fise se reduz durante a inani‡Ĉo e doen‡as, mas a degenera‡Ĉo de c‚lulas cartilaginosas nas colunas prossegue, produzindo uma linha densa de calcifica‡Ĉo provis˘ria que depois se torna osso com trab‚culas mais grossas, denominadas de linhas de parada do crescimento. Sem um conhecimento b sico do crescimento ˘sseo e do aspecto dos ossos nas radiografias em idades diversas, poder-se-ia confundir uma placa de cartilagem epifis ria com uma fratura ou interpretar a separa‡Ĉo de uma epĦfise como normal. 
Se vocˆ conhece a idade do paciente e a localiza‡Ĉo das epĦfises, esses erros podem ser evitados, especialmente se vocˆ notar que as margens da di fise e epĦfise sĈo suavemente encurvadas na regiĈo da cartilagem epifis ria. Uma fratura deixa uma margem abrupta e geralmente irregular de osso. Uma lesĈo que cause uma fratura no adulto pode causar deslocamento de uma epĦfise num jovem. Desenvolvimento dos Ossos curtos. O desenvolvimento dos ossos curtos ‚ semelhante ao do centro prim rio dos ossos longos e apenas um osso, o calcĈneo, desenvolve um centro secund rio de ossifica‡Ĉo. Suprimento sanguĦneo dos ossos. Os ossos sĈo ricamente supridos de vasos sanguĦneos que os penetram a partir do peri˘steo, a membrana de tecido conectivo fibroso que os reveste. As art‚rias periostais entram na di fise em in£meros pontos e sĈo respons veis por sua nutri‡Ĉo. Assim, um osso cujo peri˘steo ‚ removido morrer . Pr˘ximo ao centro da di fise de um osso longo, uma art‚ria nutrĦcia passa obliquamente atrav‚s da substƒncia compacta e alcan‡a a substƒncia esponjosa e a medula. Algumas epĦfises de pressĈo sĈo, na sua maior parte, cobertas por cartilagem articular hialina. Recebem seu suprimento sanguĦneo da regiĈo da placa de cartilagem epifis ria. Tais epĦfises (p. ex., cabe‡a do fˆmur) sĈo quase completamente cobertas por cartilagem articular e recebem seu suprimento sanguĦneo de vasos que penetram logo externamente ... margem da cartilagem articular. A perda de suprimento sanguĦneo para uma epĦfise ou para outras partes de um osso resulta em morte do tecido ˘sseo, uma condi‡Ĉo denominada necrose avascular (necrose isquˆmica ou ass‚ptica) do osso. Ap˘s toda fratura, diminutas reas contĦguas de osso sofrem necrose avascular. Em algumas fraturas, pode ocorrer necrose de um grande fragmento do osso caso seu suprimento sanguĦneo tenha sido interrompido. Um grupo de desordens das epĦfises em crian‡as resulta de necrose avascular de etiologia desconhecida. 
SĈo referidas como osteocondroses e geralmente envolvem uma epĦfise de pressĈo na extremidade de um osso longo. Inerva‡Ĉo nos ossos. O peri˘steo ‚ rico em nervos sensitivos, chamados de nervos periostais. Isto explica por que a dor por lesĈo ˘ssea geralmente ‚ intensa. Os nervos que acompanham as art‚rias no interior dos ossos sĈo provavelmente vasomotores (ou seja, causam constri‡Ĉo ou dilata‡Ĉo dos vasos nutrĦcios). ARQUITETURA DOS OSSOS A estrutura do osso varia de acordo com sua fun‡Ĉo. Nos ossos longos concebidos para rigidez e que servem de fixa‡äes de m£sculos e ligamentos, a quantidade de osso compacto ‚ relativamente maior pr˘ximo ao meio da di fise, onde estĈo sujeitos a empenar. A substƒncia compacta da di fise assegura arquiteturalmente resistˆncia para a sustenta‡Ĉo de peso. Ademais, conforme descrito previamente, os ossos longos tem eleva‡äes (linhas, cristas, tub‚rculos e tuberosidades) que servem como contrafortes nas reas onde os m£sculos potentes se fixam. Os ossos vivos possuem alguma elasticidade (flexibilidade) e muita rigidez (dureza). A elasticidade decorre da sua substƒncia orgƒnica (tecido fibroso), e a rigidez, das suas lƒminas e tubos de fosfato de c lcio inorgƒnico. Os sais, representando cerca de 60 % do peso de um osso, sĈo depositados na matriz de fibras col genas. Os ossos sĈo como madeira-de-lei ao resistir ... tensĈo e como concreto ao resistir ... compressĈo. Por dentro da arma‡Ĉo externa de substƒncia compacta, particularmente nas extremidades dos ossos longos, h substƒncia esponjosa que tem um aspecto semelhante a tela de arame. A substƒncia esponjosa nĈo ‚ disposta de maneira casual, mas sim composta de tubos e lƒminas que sĈo arranjados como escoras ao longo das linhas de pressĈo e tensĈo. A arquitetura das trab‚culas ˘sseas ‚ peculiar a cada pessoa, um fato de valor na identifica‡Ĉo de restos esquel‚ticos e um parte importante da medicina forense. Fun‡äes dos ossos. As principais fun‡äes dos ossos sĈo fornecer: 1. 
Prote‡Ĉo formando as paredes rĦgidas de cavidades (p. ex., cavidade do crƒnio) que contˆm estruturas vitais (p. ex., o enc‚falo). 2. Sustenta‡Ĉo (p. ex., a estrutura rĦgida para o corpo). 3. Uma base mecƒnica para o movimento ao assegurar fixa‡äes para os m£sculos e servir como alavancas para aqueles que produzem os movimentos permitidos pelas articula‡äes. 4. Forma‡Ĉo de c‚lulas sangĦneas. A medula ˘ssea vermelha nas extremidades dos ossos longos, esterno e costelas, v‚rtebras e na dĦploe dos ossos planos do crƒnio sĈo os locais de desenvolvimento de hem cias, alguns linf˘citos, granul˘citos e plaquetas do sangue. 5. Armazenamento de sais. Os sais de c lcio, f˘sforo e magn‚sio nos ossos proporcionam uma reserva mineral para o corpo 2. Articula‡äes O sistema articular consiste em articula‡äes ou junturas onde dois ou mais ossos relacionam-se entre si na sua regiĈo de contato. O estudo das articula‡äes ‚ chamado de artrologia. As articula‡äes sĈo classificadas segundo o tipo de material que as mant‚m unidas (p. ex., articula‡äes fibrosas, cartilagĦneas e sinoviais). ARTICULA€ċES FIBROSAS Os ossos envolvidos nessa articula‡äes estĈo unidos por tecido fibroso. A quantidade de movimento permitida na articula‡Ĉo depende do comprimento das fibras que unem os ossos. Suturas. Os ossos estĈo separados, embora mantidos unidos por uma fina camada de tecido fibroso. A uniĈo ‚ extremamente firme e h pouco ou nenhum movimento entre os ossos. As suturas ocorrem apenas no crƒnio; por isso, ...s vezes sĈo chamadas de articula‡äes "do tipo craniano". As margens dos ossos podem superpor-se (sutura escamosa) ou entrela‡ar-se (sutura serr til). No crƒnio de um rec‚m-nascido, os ossos da calv ria em crescimento nĈo estĈo em contato completo uns com os outros . Nos locais onde nĈo ocorre contato, as suturas sĈo reas largas de tecido fibroso conhecidas como fontanelas ou fontĦculos. Os termos fontanelas e fontĦculos significam "pequenas nascentes ou fontes". 
Provavelmente receberam essa denomina‡Ĉo porque em tempos remotos ter-se-iam realizado aberturas nesses pontos do crƒnio em lactentes com fontanelas abauladas em decorrˆncia da hipertensĈo intracraniana. Nesses casos, o lĦquido cerebrospinal (LCE) e o sangue que jorraram provavelmente lembravam uma fonte d' gua. O fontĦculo mais proeminente ‚ o anterior, que as pessoas leigas chamam de moleira. A separa‡Ĉo dos ossos nas suturas e fontĦculos do crƒnio do rec‚m-nado permite que eles se superponham durante o nascimento, facilitando a passagem de sua cabe‡a atrav‚s do canal de parto. O fontĦculo anterior nĈo costuma estar presente ap˘s os 18 a 24 meses de idade (isto ‚, apresenta a mesma largura das suturas do crƒnio). A uniĈo dos ossos no pt‚rio, situado no local do fontĦculo ƒntero-lateral, ter ocorrido aos seis anos em cerca de 50% da crian‡as. A fusĈo dos ossos atrav‚s das linhas de sutura (sinostose) come‡a na face interna da calv ria ou ab˘bada craniana no inĦcio da segunda d‚cada e progride por toda vida. Quase todas as suturas do crƒnio estĈo obliteradas em pessoas muito idosas. Sindesmose. Nesse tipo de articula‡Ĉo fibrosa, os dois ossos sĈo unidos por uma lƒmina de tecido fibroso. O tecido pode ser um ligamento ou uma membrana fibrosa inter˘ssea; p. ex., as margens inter˘sseas do r dio e ulna estĈo unidas pela membrana inter˘ssea do antebra‡o. Nas sindesmoses, consegue-se realizar um movimento de leve a consider vel. O grau de movimento depende da distƒncia entre os ossos e do grau de flexibilidade do tecido fibroso. A membrana inter˘ssea entre o r dio e a ulna no antebra‡o ‚ suficientemente larga e flexĦvel para permitir movimentos consider veis, como ocorre durante a prona‡Ĉo e supina‡Ĉo do antebra‡o. ARTICULA€ċES CARTILAGÖNEAS Os ossos envolvidos nessa articula‡äes sĈo unidos por cartilagem. Articula‡äes CartilagĦneas Prim rias (Sincondroses). Os ossos estĈo ligados por cartilagem hialina, que permite uma leve flexĈo no inĦcio da vida. 
As sincondroses geralmente representam condi‡äes tempor rias, p. ex., durante o perĦodo de desenvolvimento endocondral de um osso longo. Conforme descrito previamente, uma placa de cartilagem epifis ria separa as extremidades (epĦfises) e corpo (di fise) de um osso longo. Uma articula‡Ĉo cartilagĦnea do tipo sincondrose permite crescimento em extensĈo do osso. Quando o crescimento pleno ‚ atingido, a cartilagem ‚ convertida em osso e a epĦfise funde-se ... di fise; isto ‚, uma sincondrose ‚ convertida numa sinostose. Outras sincondroses sĈo permanentes, p. ex., onde a cartilagem costal da primeira costela une-se ao man£brio do esterno. Articula‡äes CartilagĦneas Secund rias (SĦnfises). As faces articulares dos ossos nessas articula‡äes estĈo cobertas por cartilagem hialina e essas faces cartilagĦneas estĈo unidas por tecido fibroso e/ou fibrocartilagem. As sĦnfises sĈo articula‡äes fortes e poucos movĦveis. As articula‡äes intervertebrais anteriores com seus discos intervertebrais sĈo classificadas como sĦnfises. SĈo concebidas para resistˆncia e absor‡Ĉo de choque. Os corpos das v‚rtebras estĈo ligados por ligamentos longitudinais e pelos an‚is fibrosos dos discos intervertebrais. Cumulativamente, esses discos fibrocartilagĦneos conferem uma flexibilidade consider vel ... coluna vertebral. Outros exemplos de sĦnfises sĈo a sĦnfise p£bica entre os corpos dos ossos p£bis e a articula‡Ĉo manubriosternal entre o man£brio e corpo do esterno. Durante a gravidez, a sĦnfise p£bica e outras articula‡äes da pelve sofrem altera‡äes que possibilitam movimentos mais livres. Acredita-se que os ligamentos associados a essas articula‡äes sejam "amolecidos" pelo horm"nio relaxina. As altera‡äes produzidas nas articula‡äes permitem que a cavidade p‚lvica aumente, o que facilita o parto. ARTICULA€ċES SINOVIAIS As articula‡äes sinoviais, tipo mais comum e mais importante funcionalmente, normalmente proporcionam livres movimentos entre os ossos unidos. 
The four typical characteristics of a synovial joint are that it has (1) a joint cavity, (2) articular cartilage, (3) a synovial membrane
f:\12000 essays\technology & computers (295)\Aol is it for me .TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ You have probably heard of the Internet, but you weren't really sure whether it was for you. You thought about it, but after all, it costs so much, and things like pornography and improper language are everywhere, right? Wrong! Perhaps I can convince you that America Online will be worth your time and money. One of the main reasons that people don't go online is that they think it costs too much. America Online, or AOL, doesn't really cost all that much. When you sign on, you get from 10 to 50 hours free, depending on the software that you download. Once you run out of free hours, you may choose to stay online for a monthly fee. This monthly fee can be either $9.95 or $19.95, depending on how many hours you plan on using. If you are concerned that your children will visit web pages you would prefer they didn't, you can turn on parental controls that don't allow them to visit those pages. If you aren't familiar with web pages, they are basically pages you look at containing information about a company, person, or product. You can also sign your child on as a child or teen, which keeps them out of restricted areas. Perhaps your main concern is people finding out things that you don't want them to know. They only know as much as you tell them. If someone asks for your password, credit card number, or any other personal information, you don't have to tell them. When you first sign on, AOL staff will ask for things like your name, age, address, phone number, and your credit card or checking account number. These things remain confidential and are used only for billing purposes. If anyone asks for personal information, you can easily report them to AOL.
When someone is reported, they are either warned or kicked off the service. You can also report people who swear or use any kind of offensive language. Many of the chat rooms are watched by "online hosts," people who work for AOL. These "guards" make sure nothing bad happens in chat rooms. You can be sure that there are AOL staff in the romance rooms especially, because that is where the most foul and vulgar language occurs. If you are too young to be in a room, they will tell you to leave and go to a room where people your age belong. The online world also offers thousands of reference sources, such as Grolier's Multimedia Encyclopedia, and over 100 magazines. These magazines alone are of great value to anyone who enjoys reading them. These references will tell you almost anything, but if you want to know about something that is not in these sources, you can leave the New York Public Library's librarian a message. This person will respond to your question within the week. Finally, with your America Online subscription you get unlimited e-mail. What is e-mail, you ask? E-mail stands for electronic mail. It is a way to send letters to anyone in the world who is hooked up to the Internet or another online service. This mail is received almost instantly, within a few seconds. This way, you could send letters to a pen pal in Egypt: instead of waiting up to a month or more, he will receive them the same day. Having America Online opens you up to a whole new world of information and people. America Online provides an inexpensive yet secure place for work, education, and recreation. A family has so much to gain and little to lose by signing on today.
f:\12000 essays\technology & computers (295)\Application Software.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
John Hassler
Professor C. Mason
Computer Information Systems 204
September 13, 1996
Application Software
Computer systems contain both hardware and software.
Hardware is any tangible item in a computer system, such as the system unit, keyboard, or printer. Software, or a computer program, is the set of instructions that directs the computer to perform a task. Software falls into one of two categories: system software and application software. System software controls the operation of the computer hardware, whereas application software enables a user to perform tasks. Three major types of application software on the market today for personal computers are word processors, electronic spreadsheets, and database management systems (Little and Benson 10-42). A word processing program allows a user to efficiently and economically create professional-looking documents such as memoranda, letters, reports, and resumes. With a word processor, one can easily revise a document. To improve the accuracy of one's writing, word processors can check the spelling and grammar in a document. They also provide a thesaurus to enable a user to add variety and precision to his or her writing. Many word processing programs also provide desktop publishing features to create brochures, advertisements, and newsletters. An electronic spreadsheet enables a user to organize data in a fashion similar to a paper spreadsheet. The difference is that the user does not have to perform calculations manually; electronic spreadsheets can be instructed to perform any computation desired. The contents of an electronic spreadsheet can be easily modified by the user. Once the data is modified, all calculations in the spreadsheet are recomputed automatically. Many electronic spreadsheet packages also enable a user to graph the data in his or her spreadsheet (Wakefield 98-110). A database management system (DBMS) is a software program that allows a user to efficiently store a large amount of data in a centralized location. Data is one of the most valuable resources of any organization.
For this reason, users desire that data be organized and readily accessible in a variety of formats. With a DBMS, a user can easily store data, retrieve data, modify data, analyze data, and create a variety of reports from the data (Aldrin 25-37). Many organizations today have all three of these types of application software packages installed on their personal computers. Word processors, electronic spreadsheets, and database management systems make users' tasks more efficient. When users are more efficient, the company as a whole operates more economically and efficiently.
Works Cited
Aldrin, James F. "A Discussion of Database Management Systems." Database Monthly May 1995: 25-37.
Little, Karen A., and Jeffrey W. Benson. Word Processors. Boston: Boyd Publishing Company, 1995.
Wakefield, Sheila A. "What Can an Electronic Spreadsheet Do for You?" PC Analyzer Apr. 1995: 98-110.
f:\12000 essays\technology & computers (295)\Applications of shit in the computer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Argumentative Essay
In the summer of 1996, Gwen Jacobs enjoyed a topless summer stroll, during which she was seen by a local O.P.P. officer, was apprehended, and was subsequently charged with indecent exposure. Gwen Jacobs pleaded not guilty in court and won the right to go topless in Ontario. This incident raised an excellent question: should women be allowed to go topless on public beaches and in other public areas? The answer is strictly no; women should not be allowed to go topless anywhere outside of their own homes. One of the many reasons I believe that women should not be allowed to go topless concerns the safety of women. Men and boys have, in recent years, used short, tight skirts and shirts as an excuse for rape or date rape. Men have said that because the girl was wearing a tight shirt and short skirt, it was obvious that she was easy and wanted the attention. This statement leads me to my next point.
The average human being, upon first contact with a stranger, bases his initial impression of that person solely on the person's appearance. This is only natural, as the only thing we know about a stranger is what we see of them the first time we meet. We are all aware of the labels "preppy," "jockish," "skater," "slutty," etc. This last label, "slutty," is interpreted by 90 percent of North Americans as a tight skirt and tight tank top, which happens to be the usual ensemble of a prostitute. This first impression of a girl in nothing but a skirt and a bare chest will no doubt escalate into the new version of a "slut": a girl that wants it. My second point is this: what kind of questions will a mother be asked by her son when he sees a half-nude woman walking down the street? The first question this child will ask is, "Why do these women have no shirt on and you do?" Your reply will be, "Well, ahhh, go talk to your father." This dilemma will no doubt arise as these and other questions about the sexual nature of the body are put forth by young children, questions that you as a parent do not feel should be answered truthfully for such a young child. My third point begins thousands of years ago, when man first walked the earth. When man first walked, he hunted, and his wife (clothesless) cleaned the game and took care of the young. As the centuries have progressed, women have stepped forth into a new era of equal rights. We've seen the first women doctors, astronauts, and business owners, and many other firsts in numerous professions. Women have made giant leaps when it comes to respect from men in their professional fields. This respect, which women have been fighting for over the past century, is on the verge of collapse. Women seem to be taking this new law allowing them to go topless to an extreme: walking their dogs, walking on the beach, and strolling through public places with no tops on.
This display of nudity will, in the average person's eyes, whether they admit it or not, cause men to look down again on women. If, for example, the first woman astronaut (Sally Ride) were to start going topless in public places, it would be plastered on the front page of every newspaper. This in turn would lead to her fellow colleagues looking down on her. This would be a giant step backwards with respect to equal rights for women. Following the changes to this law allowing women to go topless, our cities will slowly begin to diverge into places that encourage nudity and places that do not. Our economy will begin to collapse, as store owners appalled by this nudity will be forced to close their stores and move if the nudity surrounds them. This also applies to stores whose workers want to go topless; they will be forced to relocate to places of nudity. As this happens, our cities will slowly become two-sided, and our economy's stability will collapse beneath our feet. An excellent example of this situation is taking place in Quebec. A law in Quebec states that a woman may work in nothing less than lingerie. So a Quebec barbershop run by a well-endowed woman decided to charge an extra ten dollars per haircut; she would remove her shirt so customers could watch her cut their hair in just a bra. She also charged an extra fifteen to remove her bottoms so that she had only her underwear on. This new business skyrocketed, and there are currently 15 of these hairdressers in Quebec. The neighborhoods surrounding these barbershops are appalled by what is going on, and many people have relocated their families away from this nudity. In conclusion, to the question of whether women should be allowed to go topless in public places, it has been clearly shown that women should not be allowed to go topless anywhere outside of their own homes.
f:\12000 essays\technology & computers (295)\Artificial Inteligence.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ABSTRACT
Current neural network technology is the most progressive of the artificial intelligence systems today. Applications of neural networks have made the transition from laboratory curiosities to large, successful commercial applications. To enhance the security of automated financial transactions, current technologies in both speech recognition and handwriting recognition are likely ready for mass integration into financial institutions.
RESEARCH PROJECT
TABLE OF CONTENTS
Introduction
· Purpose
· Source of Information
· Authorization
Overview
The First Steps
Computer-Synthesized Senses
· Visual Recognition
· Current Research
Computer-Aided Voice Recognition
· Current Applications
Optical Character Recognition
Conclusion
Recommendations
Bibliography
INTRODUCTION
· Purpose
The purpose of this study is to determine additional areas where artificial intelligence technology may be applied for positive identification of individuals during financial transactions, such as automated banking transactions, telephone transactions, and home banking activities. This study focuses on academic research in neural network technology. This study was funded by the Banking Commission in its effort to deter fraud.
Overview
Recently, the thrust of studies into practical applications for artificial intelligence has focused on exploiting the expectations of both expert systems and neural network computers. In the artificial intelligence community, the proponents of expert systems have approached the challenge of simulating intelligence differently than their counterparts, the proponents of neural networks. Expert systems contain the coded knowledge of a human expert in a field; this knowledge takes the form of "if-then" rules. The problem with this approach is that people don't always know why they do what they do.
And even when they can express this knowledge, it is not easily translated into usable computer code. Also, expert systems are usually bound by a rigid set of inflexible rules which do not change with experience gained by trial and error. In contrast, neural networks are designed around the structure of a biological model of the brain. Neural networks are composed of simple components called "neurons," each having simple tasks and simultaneously communicating with the others through complex interconnections. As Herb Brody states, "Neural networks do not require an explicit set of rules. The network - rather like a child - makes up its own rules that match the data it receives to the result it's told is correct" (42). Impossible to achieve in expert systems, this ability to learn by example is the characteristic of neural networks that makes them best suited to simulating human behavior. Computer scientists have exploited this characteristic to achieve breakthroughs in computer vision, speech recognition, and optical character recognition. Figure 1 illustrates the knowledge structures of neural networks as compared to expert systems and standard computer programs. Neural networks restructure their knowledge base at each step in the learning process. This paper focuses on neural network technologies which have the potential to increase security for financial transactions. Much of the technology, such as visual recognition, is currently in the research phase and has yet to produce a commercially available product. Other applications are a multimillion-dollar industry, and the products are well known, like Sprint Telephone's voice-activated telephone calling system. In the Sprint system, the neural network positively recognizes the caller's voice, thereby authorizing activation of his calling account.
The First Steps
The study of the brain was once limited to the study of living tissue.
Any attempts at an electronic simulation were brushed aside by the neurobiologist community as abstract conceptions that bore little relationship to reality. This was partially due to the over-excitement in the 1950s and 1960s over networks that could recognize some patterns but were limited in their learning abilities by hardware limitations. In the 1990s, computer simulations of brain functions are gaining respect as the simulations increase their ability to predict the behavior of the nervous system. This respect is illustrated by the fact that many neurobiologists are increasingly moving toward neural-network-type simulations. One such neurobiologist, Sejnowski, introduced a three-layer net which has made some excellent predictions about how biological systems behave. Figure 2 illustrates this network, consisting of three layers, in which a middle layer of units connects the input and output layers. When the network is given an input, it sends signals through the middle layer, which checks for correct output. An algorithm used in the middle layer reduces errors by strengthening or weakening connections in the network. This scheme, in which the network learns to adapt to changing conditions, is called back-propagation. The value of Sejnowski's network is illustrated by an experiment by Richard Andersen at the Massachusetts Institute of Technology. Andersen's team spent years researching the neurons monkeys use to locate an object in space (Dreyfus and Dreyfus 42-61). Andersen decided to use a neural network to replicate the findings from this research. The team "trained" the neural network to locate objects by retina and eye position, then observed the middle layer to see how it responded to the input. The result was nearly identical to what they had found in their experiments with monkeys.
Computer-Synthesized Senses
· Visual Recognition
The ability of a computer to distinguish one customer from another is not yet a reality.
But recent breakthroughs in neural network visual technology are bringing us closer to the time when computers will positively identify a person.
· Current Research
Studying the retina of the eye is the focus of research by two professors at the California Institute of Technology, Misha A. Mahowald and Carver Mead. Their objective is to electronically mimic the function of the retina of the human eye. Previous research in this field consisted of processing the absolute value of the illumination at each point on an object, and it required a very powerful computer (Thompson 249-250). The analysis required that measurements be taken over a massive number of sample locations on the object, and so it required the computing power of a massive digital computer to analyze the data. The professors believe that to replicate the function of the human retina, they can use a neural network modeled on a similar biological structure of the eye, rather than simply using massive computer power. Their chip utilizes an analog computer which is less powerful than the previous digital computers. They compensated for the reduced computing power by employing a far more sophisticated neural network to interpret the signals from the electronic eye. They modeled the network in their silicon chip on the top three layers of the retina, which are the best-understood portions of the eye (250). These are the photoreceptors, horizontal cells, and bipolar cells. The electronic photoreceptors, which make up the first layer, are like the rod and cone cells in the eye. Their job is to accept incoming light and transform it into electrical signals. In the second layer, a neural network technique is used: the horizontal cells are interconnected with one another and with the bipolar cells of the third layer. The connected cells then evaluate the estimated reliability of the other cells and give a weighted average of the potentials of the cells around them.
Nearby cells are given the most weight and distant cells less weight (251). This technique is very important to the process because of the dynamic nature of image processing. If the image were accepted without testing its probable accuracy, the likelihood of image distortion would increase as the image changed. The silicon chip that the two professors developed contains about 2,500 pixels, each a photoreceptor with its associated image-processing circuitry. The chip has circuitry that allows a professor to focus on each pixel individually or to observe the whole scene on a monitor. The professors stated in their paper, "The behavior of the adaptive retina is remarkably similar to that of biological systems" (qtd. in Thompson 251). The retina was first tested by changing the light intensity of just one pixel while the intensity of the surrounding cells was kept at a constant level. The design of the neural network caused the response of the surrounding pixels to react in the same manner as in biological retinas. They state, "In digital systems, data and computational operations must be converted into binary code, a process that requires about 10,000 digital voltage changes per operation. Analog devices carry out the same operation in one step and so decrease the power consumption of silicon circuits by a factor of about 10,000" (qtd. in Thompson 251). Besides validating their neural network, the accuracy of this silicon chip displays the usefulness of analog computing, despite the assumption that only digital computing can provide the accuracy necessary for the processing of information. As close as these systems come to imitating their biological counterparts, they still have a long way to go. For a computer to identify more complex shapes, e.g., a person's face, the professors estimate the requirement would be at least 100 times more pixels, as well as additional circuits that mimic the movement-sensitive and edge-enhancing functions of the eye.
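The neighbor-weighting scheme described above, in which nearby cells count most and distant cells progressively less, can be sketched as a distance-weighted average. This is a minimal software illustration, not the professors' analog circuit; the kernel radius and inverse-distance weights are illustrative assumptions.

```python
# Sketch of distance-weighted neighbor averaging, loosely modeled on
# the horizontal-cell behavior described in the text. The radius and
# the 1/(1+d) weighting are assumptions for illustration only.

def smoothed_potential(potentials, i, radius=2):
    """Weighted average around cell i: nearby cells get the most
    weight, farther cells progressively less."""
    total, weight_sum = 0.0, 0.0
    for j in range(max(0, i - radius), min(len(potentials), i + radius + 1)):
        w = 1.0 / (1 + abs(i - j))  # weight falls off with distance
        total += w * potentials[j]
        weight_sum += w
    return total / weight_sum

# A single bright pixel in a dark field is pulled toward its
# neighbors' level, damping isolated distortions.
field = [0.0, 0.0, 1.0, 0.0, 0.0]
print(smoothed_potential(field, 2))  # 0.375, well below the raw 1.0
```

The point of the weighting is exactly what the text describes: no single cell's reading is accepted at face value, so an isolated spurious value is suppressed rather than propagated as the image changes.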
They feel it is possible to achieve this number of pixels in the near future. When it does arrive, the new technology will likely be capable of recognizing human faces. Visual recognition would have an undeniable effect on reducing crime in automated financial transactions. Future technology breakthroughs will bring visual recognition closer to the recognition of individuals, thereby enhancing the security of automated financial transactions. · Computer-Aided Voice Recognition Voice recognition is another area that has been the subject of neural network research. Researchers have long been interested in developing an accurate computer-based system capable of understanding human speech as well as accurately identifying one speaker from another. · Current Research Ben Yuhas, a computer engineer at John Hopkins University, has developed a promising system for understanding speech and identifying voices that utilizes the power of neural networks. Previous attempts at this task have yielded systems that are capable of recognizing up to 10,000 words, but only when each word is spoken slowly in an otherwise silent setting. This type of system is easily confused by back ground noise (Moyne 100). Ben Yuhas' theory is based on the notion that understanding human speech is aided, to some small degree, by reading lips while trying to listen. The emphasis on lip reading is thought to increase as the surrounding noise levels increase. This theory has been applied to speech recognition by adding a system that allows the computer to view the speaker's lips through a video analysis system while hearing the speech. The computer, through the neural network, can learn from its mistakes through a training session. Looking at silent video stills of people saying each individual vowel, the network developed a series of images of the different mouth, lip, teeth, and tongue positions. 
It then compared the video images with the possible sound frequencies and guessed which combination was best. Yuhas then combined the video recognition with the speech recognition systems and input a video frame along with speech that had background noise. The system then estimated the possible sound frequencies from the video and combined the estimates with the actual sound signals. After about 500 trial runs the system was as proficient as a human looking at the same video sequences. This combination of speech recognition and video imaging substantially increases the security factor by not only recognizing a large vocabulary, but also by identifying the individual customer using the system. · Current Applications Laboratory advances like Ben Yuhas' have already created a steadily increasing market in speech recognition. Speech recognition products are expected to break the billion-dollar sales mark this year for the first time. Only three years ago, speech recognition products sold less than $200 million (Shaffer, 238). Systems currently on the market include voice-activated dialing for cellular phones, made secure by their recognition and authorization of a single approved caller. International telephone companies such as Sprint are using similar voice recognition systems. Integrated Speech Solution in Massachusetts is investigating speech applications which can take orders for mutual funds prospectuses and account activities (239). · Optical Character Recognition Another potential area for transaction security is in the identification of handwriting by optical character recognition systems (OCR). In conventional OCR systems the program matches each letter in a scanned document with a pre-arranged template stored in memory. Most OCR systems are designed specifically for reading forms which are produced for that purpose. Other systems can achieve good results with machine printed text in almost all font styles. 
However, none of the systems is capable of recognizing handwritten characters. This is because every person writes differently. Nestor, a company based in Providence, Rhode Island has developed handwriting recognition products based on developments in neural network computers. Their system, NestorReader, recognizes handwritten characters by extracting data sets, or feature vectors, from each character. The system processes the input representations using a collection of three by three pixel edge templates (Pennisi, 23). The system then lays a grid over the pixel array and pieces it together to form a letter. Then the network discovers which letter the feature vector most closely matched. The system can learn through trial and error, and it has an accuracy of about 80 percent. Eventually this system will be able to evaluate all symbols with equal accuracy. It is possible to implement new neural-network based OCR systems into standard large optical systems. Those older systems, used for automated processing of forms and documents, are limited to reading typed block letters. When added to these systems, neural networks improve accuracy of reading not only typed letters but also handwritten characters. Along with automated form processing, neural networks will analyze signatures for possible forgeries. Conclusion Neural networks are still considered emerging technology and have a long way to go toward achieving their goals. This is certainly true for financial transaction security. But with the current capabilities, neural networks can certainly assist humans in complex tasks where large amounts of data need to be analyzed. For visual recognition of individual customers, neural networks are still in the simple pattern matching stages and will need more development before commercially acceptable products are available. 
Speech recognition, on the other hand, is already a huge industry with customers ranging from individual computer users to international telephone companies. For security, voice recognition could be an added link to the chain of pre-established systems. For example, automated account inquiry, by telephone, is a popular method for customers to determine the status of existing accounts. With voice identification of customers, an option could be added for a customer to request account transactions and payments to other institutions. For credit card fraud detection, banks have relied on computers to identify suspicious transactions. In fraud detection, these programs look for sudden changes in spending patterns such as large cash withdrawals or erratic spending. The drawback to this approach is that there are more accounts flagged for possible fraud than there are investigators. The number of flags could be dramatically reduced with optical character recognition to help focus investigative efforts. It is expected that the upcoming neural network chips and add-on boards from Intel will add blinding speed to the current network software. These systems will even further reduce losses due to fraud by enabling more data to be processed more quickly and with greater accuracy. Recommendations Breakthroughs in neural network technology have already created many new applications in financial transaction security. Currently, neural network applications focus on processing data such as loan applications, and flagging possible loan risks. As computer hardware speed increases and as neural networks get smarter, "real-time" neural network applications should become a reality. "Real-time" processing means the network processes the transactions as they occur. In the mean time, 1. Watch for advances in visual recognition hardware / neural networks. When available, commercially produced visual recognition systems will greatly enhance the security of automated financial transactions. 2. 
Computer aided voice recognition is already a reality. This technology should be implemented in automated telephone account inquiries. The feasibility of adding phone transactions should also be considered. Cooperation among financial institutions could result in secure transfers of funds between banks when ordered by the customers over the telephone. 3. Handwriting recognition by OCR systems should be combined with existing check processing systems. These systems can reject checks that are possible forgeries. Investigators could follow-up on the OCR rejection by making appropriate inquiries with the check writer. BIBLIOGRAPHY Winston, Patrick. Artificial Intelligence. Menlo Park: Addison-Wesley Publishing, 1988. Welstead, Stephen. Neural Network and Fuzzy Logic in C/C++. New York: Welstead, 1994. Brody, Herb. "Computers That Learn by Doing." Technology Review August 1990: 42-49. Thompson, William. "Overturning the Category Bucket." BYTE January 1991: 249-50+. Hinton, Geoffrey. "How Neural Networks Learn from Experience." Scientific American September 1992: 145-151. Dreyfus, Hubert., and Stuart E. Dreyfus. "Why Computers May Never Think Like People." Technology Review January 1986: 42-61. Shaffer, Richard. "Computers with Ears." FORBES September 1994: 238-239. f:\12000 essays\technology & computers (295)\Artificial Intellegence.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Artificial Intellegence Identification And Description Of The Issue Over the years people have been wanting robots to become more Intelligent. In the past 50 years since computers have been around, the computer world has grown like you wouldn't believe. Robots have now been given jobs that were 15 years ago no considered to be a robots job. Robots are now part of the huge American government Agency the FBI. They are used to disarm bombs and remove dangerous products from a site without putting human life in danger. 
You probably don't think that when you are in a carwash that a robotic machine is cleaning your car. The truth is that they are. The robot is uses senses to tell the main computer what temperature the water should be and what style of wash the car is getting e.g. Supreme or Normal wash. Computer robots are being made, that learn from their mistakes. Computers are now creating their own programs. In the past there used to be some problems, now they are pretty much full proof. The Television and Film business has to keep up with the demands from the critics sitting back at home, they try and think of new ideas and ways in which to entertain the audiences. They have found that robotics interests people. With that have made many movies about robotics (e.g. Terminator, Star Wars, Jurassic Park ). Movie characters like the terminator would walk, talk and do actions by its self mimicking a human through the use of Artificial Intelligence. Movies and Television robots don't have Artificial Intelligence ( AI ) but are made to look like they do. This gives us the viewers a reality of robotics with AI. Understanding Of The IT Background Of The Issue Artificial Intelligence means " Behavior performed by a machine that would require some degree of intelligence if carried out by a human ". The Carwash machine has some intelligence which enables it to tell the precise temperature of the water it is spraying onto your car. If the water is to hot it could damage the paint work or even make the rubber seals on the car looser. The definition above shows that AI is present in everyday life surrounding humans where ever they go. Alan Turing Invented a way in which to test AI. This test is called the Turing Test. A computer asks a human various questions. Those conducting the test have to decide whether the human or the computer is asking the questions. 
Analysis Of The Impact Of The Issue With the increasing amount of robots with AI in the work place and in everyday life, it is making human jobs insecure for now and in the future. If we take a look at all the major car factories 70 years ago they were all hand crafted and machinery was used very little. Today we see companies like TOYOTA who produce mass amounts of cars with robots as the workers. This shows that human workmanship is required less and less needed. This is bad for the workers because they will then have no jobs and will be on the unemployment benefit or trying to find a new job. The advantage of robots is that they don't need a coffee break or need to have time of work. The company owns the machinery and therefore they have control over the robot. Solutions To Problems Arising From The Issue Some problems arising from the issue would include job loss, due to robots taking the place of humans in the work place. This could be resolved by educating the workers to do other necessary jobs in the production line. Many of the workers will still keep their other jobs that machines can't do. If robots became to intelligent this could be a huge disaster for human kind. We might end up being second best to robots. They would have the power to do anything and could eliminate humans from the planet especially if they are able to programme themselves without human help. I think the chance of this happening is slim but it is a possibility. f:\12000 essays\technology & computers (295)\Asts ADVANTAGE 9312 Communicator.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ AST's Advantage 9312 (Communicator) Advantage 9312 THE COMMUNICATOR The Communicator is AST's newest addition to the Advantage line of personal computers. The Advantage 9312 comes with a 28.8 kbps DSVD modem (Digital Simultaneous Voice and Data), Digital Camera, and a varitiy of software programs that let you interact with friends, family, or people on the otherside of the planet. 
This is where the 9312 picked-up its nickname the "Communicator". The modem and its phone capabilities are what truely sets this computer apart from the rest of the personal computers. The 28.8kbps DSVD modem lets you talk on the phone at the same time you're using the modem features. It also works with the Intel analog video camera, which plugs into a video-capture card in the pc and can transmit pictures at up to 12 frames per second. And it also comes with Intel's Video Phone software, which lets you use that camera to see both yourself and the person at the other end of the line on the screen (this is assuming your conversation partner has the same type of hook-up. Another advantage of the Communicator is AST's LifeLine which is a standard component of their technical support. This simultaneous telephone and data support, which makes use of Radish Communications' TalkShop software and the Advantage's DSVD modem, allows technicians to take information directly from your computer as soon as you authorize them to do so. Which means no more reading line after line of cumbersome configuration files, instead, the technician can download the files directly from your computer, make the appropriate changes and return them to your system in just a few seconds. The Advantage 9312 uses a 166-Mhz Pentium processor which makes it fast and reliable. The 166Mhz processor has 64-bit Data bus and is capiable of dynamic branch prediction, data integrity, error detection, multiprocessing support and performance monitoring. The 166 also has 4GB of physical address space and its clock speed range from 60 MHz to 120 MHz. The storage system comes with a 1.44MB, 3,5" floppy drive and 2.5 GB hard drive which should give the user enough space , but if not an addtional hard drive can be added to the unit. Twenty four meg. of EDO RAM is used to allow large programs to be brought up with easy and speed. There is 256KB of external cache. 
The multimedia package has a 8x speed IDE CD-Rom thats backed up with a 16 bit Sound Blaster card, 3D sound wavetable , and amplified stereo speakers that are controlled by remote control, along with video MPEG playback, and a microphone. Graphic are supported with 1MB of graphic memory, a 64 bit local bus SVGS graphic's card and is capiable of a resolution up to 1280 x 1064 x 16. Included in the package is one infared remote control and receiver, video capture and T.V. tuner card, and one analog video camera with all this the Communicator is sure to be around for awhile. The 9312 has two 32 bit ISA compatible I/O slots and five 16bit ISA compatible I/O slots. The interface has two serial ports, one parallel port, one PS/2 compatible mouse port, one analog VGA connector, and one keyboard port. A full duplex speaker phone utillizes the 28.8Kbps DSVD data/ fax/ voice to set this modem apart from the rest. Which is a big plus if your using the InterNet lot and you don't have a dedicated phone line. The DSVD makes it possible to do both at the same time, talk on the telephone and manuver around on the InterNet. The accessories include a high resolution, two button mouse ,Winows 95 keyboard, and thirty-one different software tiltes; which range from early learning for kids,to Lotus and Quicken, to Proidgy. This system is topped off with AST's LifeLine voice and data technical support and a free one year, on-site warranty. 
Technical Specifications Processor: 166MHz Intel Pentium processor Cache: 256KB external cache Memory: 24MB EDO RAM Storage: 2.5GB hard drive One 1.44MB, 3.5" floppy drive Multimedia: 8x speed IDE CD- ROM 16-bit Sound Blaster card 3D sound with Hardware Wavetable MPEG playback Amplified stereo speakers Microphone Graphics: 1MB graphics memory 64-bit local bus SVGA graphic Resolutions up to 1280 x 1064 x 16 Modem: 28.8 kbps DSVD data/ fax/ voice modem Full duplex speaker phone I/O: Two 32-bit PCI compatible slots Five 16-bit ISA compatible slots Interfaces: Two serial ports One parellel port One PS/2 compatible mouse port One analog VGA connector One keyboard port Accessories: High-resolution, two-button mouse Windows 95 keyboard ` f:\12000 essays\technology & computers (295)\Battle of the Bytes Windows95 vs Macs.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Battle of the Bytes Macintosh vs. Windows 95 It used to be that the choice between a Mac and a PC was pretty clear. If you wanted to go for the more expensive, easier to use, and better graphics and sound, you went to buy a Macintosh, for the cheaper price, it was the PC. Now it is a much different show. With the release of Windows 95 and the dynamics of the hardware market have changed the equation. On the other hand, Apple has made great price reductions on many of their computers last October. You can now buy a reasonably equipped Power Macintosh at about the same price as a PC that has about the same things. This makes the competition much harder. Windows 3.x have been great improvements over the earlier versions of Windows, and of course over DOS, but it still didn't compete against the ease of use on a Mac. The Windows 95 interface is much better than Windows 3.x. It borrows some from the Macintosh interface and has improved on it. Some improvements are the ability to work with folder icons that represent directories and subdirectories in DOS. 
Windows 95, unlike the Mac, logically groups data and resources. A Taskbar menu lets you call up and switch between any software application at any time. Thus feature is better than the Mac's because its use is more obvious. It clearly shows what is running and allows you to switch programs with a single click of the mouse. Control panels have been added so you can configure your hardware. There is easy access to frequently used files. You can make very long file names on Windows 95 instead of short and strange names that leave you wondering about, such as on Windows 3.x I could not name a folder This is stuff for school it must be a lot shorter. The Help system helps you implement its suggestions. A multilevel Undo command for all file operations safeguards your work, something Macintosh does not have. Something that Windows 95 has, similar to the Macintosh Alias function, is shortcut icons. It calls up a program very easily, instead of searching through your hard drive. The Windows 95 shortcuts go beyond the Mac's, they can refer to data inside documents as well as to files and folders, and can also call up information on a local area network server or Internet site. Windows 95's plug and play system allows the operating system to read what's on your machine and automatically configure your new software that you need to install, however, this only works if the added hardware is designed to support it, and it will for a majority of hardware. All these things are major improvements, but hardware and CONFIG.SYS settings left over from earlier programs can conflict with the new system, causing your hard drive to crash. This is something all users of Windows 95 will dread. Even though Microsoft has made many wonderful changes to Windows, Apple is working on developing a new operation system, called Copland. It may beat many of the Windows 95 improvements. Apple is still deciding on what new things to add when the system will start shipping later in the year. 
Some new things may be a customizable user interface and features such as drawers, built-in indexing and automatically updated search templates to help users manger their hard drives much more efficiently. The biggest improvement is to be able to network systems from multiple vendors running multiple operating systems. Like Windows 95, Copland will also have a single in-box for fax, e-mail, and other communications. The disadvantage of Copland is it can only be used on Power Macintoshes. I would personally go for a PC with Windows 95. I choose it because of the many programs that can be used on PC's. Whenever I walk into a computer store, such as Electronics Boutique, half of the store is taken up by programs that can be used on an IBM compatible PC. There is only one little shelf for things that run on Macs. It seems that the more people use PC's. I have met very few people with a Macintosh. I can bring many things from my computers to theirs and the other way around without worrying, "What if I need to find this for a Mac?" Schools should use Windows95 PC's because of the many more educational programs available for PC's. Since of the making of Windows 95 many companies now make programs for the PC. It may be a long time, if ever, that they will decide to make it for a Mac. Plus since of the many people with IBM PC's at home, people can bring their work to and from school. If everyone had the same kind of computer on a network, students could go into the computers at schools all over the world to use programs there. So since now that the quality of computers are equal it is very hard to make your decision. For those that are not computer literate, the best thing to do is to go for the Mac because of the easiness involved in using one. This means you get less choice of programs in a store, and if you go online, many people will be using something different from you so you have no idea what they are talking about. 
If you know how a computer is basically used, a Windows 95 PC will be no problem. It doesn't take that long to learn. You will have a bigger choice of programs and may be able to do more things with other people that have a computer. It comes down to this choice. Most of the choosing will go to schools because of the many using Macintosh computers, which most of Apple's money comes from. It is only recently companies that made software for PC's that got interested in making programs for educational purposes. So if you are deciding a computer. I leave you to decide this. Windows 95 or Macintosh, the choice is yours. I feel that this is the best journal entry I have ever written. It informs the reader a great deal about the subject and it helps you make a decision that is very important if you decide to buy a computer for work or home use. It is very helpful because it can educate people in the world that are not computer literate in a world that is being taken over by computers. Things such as the internet are used by many people, and it would certainly help if you needed to know what kind to buy so your would be compatible with someone else's. This entry tells that I am one that is around computers a lot and have an interest in them. f:\12000 essays\technology & computers (295)\Beyaunt force.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Buoyant Force The purpose of this lab is to calculate bouyant forces of objects submerged in water. The first step in the lab was to measure the mass of a metal cylinder, which was found to be 100g, and then to calculated it's weight, which was .98 newtons. Then next step was to measure the apparent weight of the cylinder when it is completely submerged in a bath of water using the formula Wa=ma*g , this was found to be 88.5grams. 
Knowing these two numbers, the bouyant force that the water places on the object can be calculated using the formula Fb=W-Wa , Wa=.8673n W=.98n Fb=.1127n Part 2 of this lab consisted of weighing an empty cup, which was 44grams. And then filling another cup up to a certain point the if any more water was added, it would spill out of a little opening in the cup, the water spilled out could be caught in the first cup. This is done so that the water spilled out can be weighed and compared to a calculated weight of which the water should be. After filling the cup, the cylinder was put into the cup , allowing the water to spill out and be caught in the first cup. After the water had spilled out it was weighed, which was 8.3g, converted to kg was .0083g. The weight of this displaced water in Newtons was 0.081423n. The percentage error with the buoyant force from step one was calculated using , this resulted, using .114 for Fb and .0813 for Wdisp, a 28.7% error. After completing this lab, it has become more apparent as to how to calculate boyant forces and how that information can be used. Buoyant Forces f:\12000 essays\technology & computers (295)\Bill Gates biography.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ William H. Gates Chairman and Chief Executive Officer Microsoft Corporation William (Bill) H. Gates is chairman and chief executive officer of Microsoft Corporation, the leading provider, worldwide, of software for the personal computer. Microsoft had revenues of $8.6 billion for the fiscal year ending June 1996, and employs more than 20,000 people in 48 countries. Born on October 28, 1955, Gates and his two sisters grew up in Seattle. Their father, William H. Gates II, is a Seattle attorney. Their late mother, Mary Gates, was a schoolteacher, University of Washington regent and chairwoman of United Way International. Gates attended public elementary school and the private Lakeside School. 
There, he began his career in personal computer software, programming computers at age 13. In 1973, Gates entered Harvard University as a freshman, where he lived down the hall from Steve Ballmer, now Microsoft's executive vice president for sales and support. While at Harvard, Gates developed the programming language BASIC for the first microcomputer -- the MITS Altair. In his junior year, Gates dropped out of Harvard to devote his energies to Microsoft, a company he had begun in 1975 with Paul Allen. Guided by a belief that the personal computer would be a valuable tool on every office desktop and in every home, they began developing software for personal computers. Gates' foresight and vision regarding personal computing have been central to the success of Microsoft and the software industry. Gates is actively involved in key management and strategic decisions at Microsoft, and plays an important role in the technical development of new products. A significant portion of his time is devoted to meeting with customers and staying in contact with Microsoft employees around the world through e-mail. Under Gates' leadership, Microsoft's mission is to continually advance and improve software technology and to make it easier, more cost-effective and more enjoyable for people to use computers. The company is committed to a long-term view, reflected in its investment of more than $2 billion on research and development in the current fiscal year. As of December 12, 1996, Gates' Microsoft stock holdings totaled 282,217,980 shares. In 1995, Gates wrote The Road Ahead, his vision of where information technology will take society. Co-authored by Nathan Myhrvold, Microsoft's chief technology officer, and Peter Rinearson, The Road Ahead held the No. 1 spot on the New York Times' bestseller list for seven weeks. Published in the U.S. by Viking, the book was on the NYT list for a total of 18 weeks. 
Published in more than 20 countries, the book sold more than 400,000 copies in China alone. In 1996, while redeploying Microsoft around the Internet, Gates thoroughly revised The Road Ahead to reflect his view that interactive networks are a major milestone in human history. The paperback second edition has also become a bestseller. Gates is donating his proceeds from the book to a non-profit fund that supports teachers worldwide who are incorporating computers into their classrooms. In addition to his passion for computers, Gates is interested in biotechnology. He sits on the board of the Icos Corporation and is a shareholder in Darwin Molecular, a subsidiary of British-based Chiroscience. He also founded Corbis Corporation, which is developing one of the largest resources of visual information in the world-a comprehensive digital archive of art and photography from public and private collections around the globe. Gates also has invested with cellular telephone pioneer Craig McCaw in Teledesic, a company that is working on an ambitious plan to launch hundreds of low-orbit satellites around the globe to provide worldwide two-way broadband telecommunications service. In the decade since Microsoft has gone public, Gates has donated more than $270 million to charities, including $200 million to the William H. Gates Foundation. The focus of Gates' giving is in three areas: education, population issues and access to technology. Gates was married on Jan. 1, 1994 to Melinda French Gates. They have one child, Jennifer Katharine Gates, born in 1996. Gates is an avid reader and enjoys playing golf and bridge. 
f:\12000 essays\technology & computers (295)\Bootlog in Standard Unix.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ [boot] LoadStart = system.drv LoadSuccess = system.drv LoadStart = keyboard.drv LoadSuccess = keyboard.drv LoadStart = mscmouse.drv LoadSuccess = mscmouse.drv LoadStart = vga.drv LoadSuccess = vga.drv LoadStart = mmsound.drv LoadSuccess = mmsound.drv LoadStart = comm.drv LoadSuccess = comm.drv LoadStart = vgasys.fon LoadSuccess = vgasys.fon LoadStart = vgaoem.fon LoadSuccess = vgaoem.fon LoadStart = GDI.EXE LoadStart = FONTS.FON LoadSuccess = FONTS.FON LoadStart = vgafix.fon LoadSuccess = vgafix.fon LoadStart = OEMFONTS.FON LoadSuccess = OEMFONTS.FON LoadSuccess = GDI.EXE LoadStart = USER.EXE INIT=Keyboard INITDONE=Keyboard INIT=Mouse STATUS=Mouse driver installed INITDONE=Mouse INIT=Display LoadStart = DISPLAY.drv LoadSuccess = DISPLAY.drv INITDONE=Display INIT=Display Resources INITDONE=Display Resources INIT=Fonts INITDONE=Fonts INIT=Lang Driver INITDONE=Lang Driver LoadSuccess = USER.EXE LoadStart = setup.exe LoadStart = LZEXPAND.DLL LoadSuccess = LZEXPAND.DLL LoadStart = VER.DLL LoadSuccess = VER.DLL LoadSuccess = setup.exe INIT=Final USER INITDONE=Final USER INIT=Installable Drivers INITDONE=Installable Drivers f:\12000 essays\technology & computers (295)\Bugged.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Bugged In our high tech world, what was once a complicated electronic task is no longer such a big deal. I'm talking about "bugging". No, I don't mean annoying people; I mean planting electronic listening devices for the purpose of eavesdropping. Bugging an office is a relatively simple process if one follows a few basic steps. First, a person needs to select the bug. 
There are many different types of bugs ranging from the infinity bug with which you can listen in on a telephone conversation from over 200 miles away to an electaronic laser beam which can pick up the vibrations of a person's voice off a window pane. The infinity bug sells for $1,000 on the black market and the laser for $895. Both, however, are illegal. Second, one needs to know where to plant the bug. A bug can be hidden in a telphone handset, in the back of a desk drawer, etc. The important thing to remember is to place the bug in a spot near where people are likely to talk. The bug may be useless if it is planted too far away from conversations take place. Last one needs to know how to plant the bug. One of the most common ways is to wire a 9-volt battery to the phone's own microphone and attaching it to a spare set of wires that the phone lines normally contain. This connection enables the phone to be live on the hook, sending continuous room sounds to the eavesdropper. It used to be that hidden microphones and concealed tape recorders were strictly for cops and spies. Today such gadgets have filtered down to the jealous spouse, the nosy neighbor, the high-level executive, and the local politician. f:\12000 essays\technology & computers (295)\Business 5000.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ INTEL Knows Best? A Major Marketing Mistake Problem Statement When Thomas Nicely, a mathematician at Lynchburg College in Virginia, first went public with the fact that Intel's new Pentium chip was defective Intel admitted to the fact that it had sold millions of defective chips, and had known about the defective chips for over four months. Intel said its reasoning for not going public was that most people would never encounter any problems with the chip. Intel said that a spreadsheet user doing random calculations would only have a problem every 27,000 years, therefore they saw no reason to replace all of the defective chips. 
However, if a user possessed a defective chip and could convince Intel that his or her calculations were particularly vulnerable to the flaw, then Intel would supply that person with a new chip. This "father knows best" attitude fostered by Intel created an uproar among users and owners of the defective chips. Six weeks after Mr. Nicely went public, IBM, a major purchaser of Pentium chips, stopped all shipments of computers containing the defective Pentium chips. Intel's stock dropped 5% following this bold move by IBM. IBM's main contention was that it puts its customers first, and Intel was failing to do this. Intel's handling of this defective chip situation gives rise to many questions. During the course of this paper I will address several of them. The first is: how did a company with such a stellar reputation for consumer satisfaction fall into the trap of believing that the customer does not know best? Secondly, what made this chip defect more of a public issue than other defective products manufactured and sold to the public in the past? Finally, how did Intel recover from such a mistake? How much did it cost them, and what lessons can other companies learn from Intel's marketing blunder so that they do not make the same mistake? Major Findings Intel is spearheaded by a chief executive named Andrew Grove. Grove is a "tightly wound engineering Ph.D. who has molded the company in his image. Both the secret of his success and the source of his current dilemma is an anxious management philosophy built around the motto 'Only the paranoid survive'." However, even with this type of philosophy, the resulting dominance he has achieved in the computer arena cannot be overlooked. Intel practically dominates the computer market with $11.5 billion in sales. Intel has over 70% of the $11 billion microprocessor market, while its Pentium and 486 chips basically control the IBM-compatible PC market.
All of these factors have resulted in an enviable 56% profit margin that only Intel seems able to achieve. So what did Intel do to achieve this sort of profit margin? In mid-1994 Intel launched a $150m marketing campaign aimed at getting consumers to recognize the Pentium name and the "Intel Inside" logo. In order to achieve this goal of brand recognition, Intel advertised its own name in conjunction with the "Intel Inside" logo and stated 'with Intel Inside, you know you have got. . . unparalleled quality'. This provided immediate name recognition for the company and led consumers to associate Intel with high-quality computers. Then Intel went the extra mile in the marketing world and spent another $80m to promote its new Pentium chips. The basis for this extra $80m was to "speed the market's acceptance of the new chip". The marketing campaign was a success: Intel had managed to achieve brand recognition. "Once the products were branded, companies found that they could generate even higher sales by advertising the benefits of their products. This advertising led consumers to regard brands as having very human personality traits, with one proving fundamental to brand longevity -- trustworthiness." Consumers readily identified a quality, up-to-date computer as one with a Pentium chip and the 'Intel Inside' logo stamped on the front. This "push" marketing strategy totally dominated the market, forcing the Pentium chip to the forefront of the computer market, all at the expense of the cheaper 486. It also made plain to Intel's purchasers, such as Compaq and IBM, that Intel was looking out for number one first and for them second. Making the Pentium chip the mainstay of the computer industry was Intel's goal, but a goal that would later come back to haunt the company for a brief period of time. Throughout the history of the computer industry, many manufacturers have sold defective products.
According to Forbes journalist Andrew Kessler, "Every piece of hardware and software ever shipped had a bug in it. You better get used to it." Whether or not 'every' piece ever shipped has had a bug is debatable, but there have been numerous examples of valid software bugs. For example, Quicken 3.0 had a bug that incorrectly capitalized the second letter of a name. Intuit, however, handled the situation by selling an upgraded version (Quicken 4.0) which fixed the problem, leaving consumers feeling as though they had gotten an upgraded version of the existing program. In essence, because Intuit had not labeled the upgrade as a debugging release, it fixed the problem and satisfied the customer all at the same time. While Intuit's customers were feeling as though they had a better product by buying the upgrade, Intuit was padding its pocketbooks through all of the upgrade sales. Other examples of companies standing behind their products are in the news week after week. Just a few years ago Saturn, the GM subsidiary, sent thousands of cars to the junkyards for scrap metal due to corroded engines, a result of contaminated engine coolant. Johnson & Johnson, the maker of Tylenol, recalled every bottle of medicine carrying the Tylenol name and offered a 100% money-back guarantee to anyone who had purchased a bottle that might be contaminated. The precedent was already set, so why would a company with the reputation of Intel fail to immediately replace all of the defective chips it had sold? Furthermore, why did Intel not come forth immediately when it first discovered that its chips had a problem? Intel's engineers said that the defective chips would affect only one-tenth of 1% of all users, and those users would be doing floating-point operations. (Floating-point operations utilize a matrix of precomputed values, similar to those found in the back of your 1040 tax booklet.
If the values in the table are correct, then you will come up with a correct answer. This was not the case with the Pentium: a table containing 1066 entries had five incorrect entries, resulting in certain calculations made by the Pentium chips being inaccurate as early as the fourth significant digit.) Considering the low number of people that the chip would supposedly affect and the high cost ($475m) associated with replacing the chips, Intel decided on a case-by-case replacement policy "for those limited users doing critical calculations". Intel's VP-corporate marketing director, Dennis Carter, stated, "We're satisfied that it's addressing the real problem. From a customer relations standpoint, this is clearly new territory for us. A recall would be disruptive for PC users and not the right thing to do for the consumer". This policy infuriated the millions of purchasers who had bought a PC with a Pentium chip. Word spread like wildfire throughout the consumer world that Intel had sold a defective product and was now refusing to replace it. This selective replacement policy is a "classic example of a product driven company that feels its technical expertise is more important than buyers' feelings". Intel was faced with a decision. Should it take the attitude that the brand is most important and do whatever was necessary to preserve it, or weigh the monetary cost of doing the right thing and replacing all of the defective chips? Initially Intel decided that replacing every defective chip would not be cost-efficient due to the sheer numbers involved. Intel had sold an estimated 4.5 million Pentium chips worldwide, and approximately 1.9 million in the U.S. alone. Intel later reversed its selective replacement policy (the Intel-knows-best attitude) and came out with a 100% replacement policy. What was the reasoning behind this change of attitude at Intel?
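The bad table entries described above could be exposed with a single division. A sketch using the widely circulated test values from the time (hedged: these are the well-known published test numbers, not an Intel diagnostic):

```python
# Classic check for the Pentium FDIV flaw. On a correctly dividing FPU
# the residue below is zero or negligibly small; on a flawed Pentium
# the same expression famously came out as 256.
x, y = 4195835.0, 3145727.0
residue = x - (x / y) * y
print(abs(residue) < 1e-6)  # True on any correct FPU
```

Anyone with a spreadsheet could type this formula in, which is part of why word of the defect spread so quickly.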
As a result of the selective replacement policy, IBM announced it would stop all shipments of PCs containing the flawed chips. This, combined with the public outcry at having spent thousands of dollars for PCs that did not work as advertised and the reluctance of corporate users to purchase new computers, resulted in Intel changing its public policy concerning the defective chips. Intel's new policy was to offer a 100% replacement to anyone who desired a new chip. This entailed either sending replacement chips to those users who wanted to replace the chip themselves, or providing free professional replacement for those who did not feel comfortable doing it themselves. Intel's new policy was in line with public expectations, but it had been delayed for several precious weeks. So one might ask, "What did this delayed change in attitude cost Intel in terms of dollars and repeat customers?" The resulting costs to Intel were enormous in some respects, but almost negligible in others. Intel's fourth-quarter earnings were charged $475m for the costs of replacing and writing off the flawed chips. This was 15% more than analysts had predicted. Fourth-quarter profits dropped 37% to $372m. This was a sharp drop in profits, but $372m is still a number to be reckoned with in the fast-paced computer industry. So did this drop in profits mean that Intel was losing its edge? I tend to think not, since Intel reported that sales of Pentiums had doubled between the third and fourth quarters, lifting 1994 revenues to $11.5 billion, a 31% increase. Apparently consumers rallied around the new replacement policy and continued to purchase Pentium-equipped computers at a very fast rate, despite Intel's initial reluctance to replace the defective chips. This renewed faith was not regained overnight, but it nevertheless happened, and so Intel is unlikely to lose its commanding lead in the industry.
So what type of assurance was it that led to this renewed faith in Intel? Following Intel's announcement of its 100% replacement policy for the defective chips, it reevaluated its replacement policy on all future defective products. Intel realized that its "fatal flaw was adopting a 'father knows b f:\12000 essays\technology & computers (295)\business in computer.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ I understand that some students who have already graduated from college are having a bit of trouble getting their new businesses started. I know of a tool that will be extremely helpful and is already available to them: the Internet. Up until a few years ago, when students graduated they were basically thrown out into the real world with just their education and their wits. Most of the time this wasn't good enough, because after three or four years of college, the prospective entrepreneur had either forgotten too much of what they were supposed to learn, or they just didn't have the finances. Then, by the time they saved sufficient money, they had again forgotten too much. I believe I have found the answer. On the Internet your students will be able to find literally thousands of links to help them with their future enterprises. In almost every city all across North America, no matter where these students move to, they are able to link up and find everything they need. They can find links like "Creative Ideas", a place they can go to retrieve ideas, innovations, inventions, patents and licensing. Once they come up with their own products, they can find free expert advice on how to market them. There are easily accessible links to experts, analysts, consultants and business leaders to guide their way to starting up their own businesses, careers and lives.
These experts can help push beginners in the right direction in every field of business, including every way to generate start-up revenue, from better management of personal finances to diving into the stock market. When beginners have sufficient funds to actually open their own company, they can't just expect the customers to come to them; they have to go out and attract them. This is where the Internet becomes most useful: in advertising. On the Internet, in every major consumer area in the world, there are dozens of ways to advertise. The easiest and cheapest way is to join groups such as "Entrepreneur Weekly". These groups offer weekly newsletters sent all over the world to major and minor businesses, informing them about new companies on the market. A newsletter includes everything about your business, from what you make/sell and where to find you, to what you're worth. These groups also advertise to the general public. The major portion of the advertising is done over the Internet, but this is good because that is their target market. By now, hopefully, the business is doing well, sales are up and money is flowing in. How do they keep track of all their funds without paying for an expensive accountant? Back to the Internet. They can find lots of expert advice on where they should reinvest their money, including how many staff to hire and how qualified they should be, what technical equipment to buy and even what insurance to purchase. This is where a lot of companies get into trouble: during expansion. Too many entrepreneurs try to leap right into the highly competitive mid-size company world. On the Internet, experts give their secrets on how to let a company's natural growth force its way in. This way they are more financially stable for the rough road ahead. The Internet isn't always going to give you the answers you are looking for, but it will always lead you in the right direction.
That is why I hope you will accept my proposal and make the students of today aware of this invaluable business tool. f:\12000 essays\technology & computers (295)\Can Computers Think The case for and against artificial int.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Can computers think? The case for and against artificial intelligence Artificial intelligence has been the subject of many bad '80s movies and countless science fiction novels. But what happens when we seriously consider the question of computers that think? Is it possible for computers to have complex thoughts, and even emotions, like Homo sapiens? This paper will seek to answer that question and also look at what attempts are being made to make artificial intelligence (hereafter called AI) a reality. Before we can investigate whether or not computers can think, it is necessary to establish what exactly thinking is. Examining the three main theories is sort of like examining three religions. None offers enough support to effectively eliminate the possibility of the others being true. The three main theories are: 1. Thought doesn't exist; enough said. 2. Thought does exist, but is contained wholly in the brain. In other words, the actual material of the brain is capable of what we identify as thought. 3. Thought is the result of some sort of mystical phenomenon involving the soul and a whole slew of other unprovable ideas. Since neither reader nor writer is a scientist, for all intents and purposes, we will say only that thought is what we (as Homo sapiens) experience. So what are we to consider intelligence? The most compelling argument is that intelligence is the ability to adapt to an environment. Desktop computers can, say, go to a specific WWW address. But, if the address were changed, the computer wouldn't know how to go about finding the new one (or even that it should).
So intelligence is the ability to perform a task taking into consideration the circumstances of completing the task. So now that we have all of that out of the way, can computers think? The issue is contested as hotly among scientists as the advantages of Superman over Batman are among pre-pubescent boys. On the one hand are the scientists who say, as philosopher John Searle does, that "Programs are all syntax and no semantics." (Discover, 106) Put another way, a computer cannot actually achieve thought because it "merely follows rules that tell it how to shift symbols without ever understanding the meaning of those symbols." (Discover, 106) On the other side of the debate are the advocates of pandemonium, explained by Robert Wright in Time thus: "[O]ur brain subconsciously generates competing theories about the world, and only the 'winning' theory becomes part of consciousness. Is that a nearby fly or a distant airplane on the edge of your vision? Is that a baby crying or a cat meowing? By the time we become aware of such images and sounds, these debates have usually been resolved via a winner-take-all struggle. The winning theory -- the one that best matches the data -- has wrested control of our neurons and thus our perceptual field." (54) So, since our thought is based on previous experience, computers can eventually learn to think. The event which brought this debate into public scrutiny was Garry Kasparov, reigning chess champion of the world, competing in a six-game chess match against Deep Blue, an IBM supercomputer with 32 microprocessors. Kasparov eventually won (4-2), but the match raised a legitimate question: if a computer can beat the chess champion of the world at his own game (a game thought of as the ultimate thinking man's game), is there any question of AI's legitimacy? Indeed, even Kasparov said he "could feel -- I could smell -- a new kind of intelligence across the table."
(Time, 55) But eventually everyone, including Kasparov, realized that what amounts to nothing more than brute force, while impressive, is not thought. Deep Blue could consider 200 million moves a second, but it lacked the intuition good human players have. Fred Guterl, writing in Discover, explains: "Studies have shown that in a typical position, a strong human player considers on average only two moves. In other words, the player is choosing between two candidate moves that he intuitively recognizes, based on prior experience, as contributing to the goals of the position." Seeking, in separate projects, to go beyond the brute force of Deep Blue are M.I.T. professor Rodney Brooks and computer scientist Douglas Lenat. The desire to conquer AI is where the similarities between the two end. Brooks is working on an AI being nicknamed Cog. Cog has cameras for eyes, eight 32-bit microprocessors for a brain, and soon will have a skin-like membrane. Brooks is allowing Cog to learn about the world like a baby would. "It sits there waving its arm, reaching for things." (Time, 57) Brooks's hope is that by programming and reprogramming itself, Cog will make the leap to thinking. This expectation is based on what Julian Dibbell, writing in Time, describes as the "bottom-up school. Inspired more by biological structures than by logical ones, the bottom-uppers don't bother trying to write down the rules of thought. Instead, they try to conjure thought up by building lots of small, simple programs and encouraging them to interact." (57) Lenat is critical of this type of AI approach. He accuses Brooks of wandering aimlessly, trying to recreate evolution. Lenat has created CYC, an AI program which uses the top-down theory, which states that "if you can write down the logical structures through which we comprehend the world, you're halfway to re-creating intelligence." (Time, 57) Lenat is feeding CYC common sense statements (i.e.
"Bread is food.") with the hope that it will make the leap to making its own logical deductions. Indeed, CYC can already pick a picture of a father watching his daughter learn to walk when prompted for pictures of happy people. Brooks has his own criticism of Lenat: "Without sensory input, the program's knowledge can never really amount to more than an abstract network of symbols." So, what's the answer? The evidence points to the position that AI is possible. What is our brain but a complicated network of neurons? And what is thought but response to stimuli? How to go about achieving AI is another question entirely. All avenues should be explored. Someone is bound to hit on it. Thank you. f:\12000 essays\technology & computers (295)\Censorship on the Internet.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Five years after the first World Wide Web site was launched at the end of 1991, the Internet has become very popular in the United States. Although President Clinton signed the 1996 Telecommunications Act[I] on Thursday, Feb. 8, 1996, the censorship issue on the net still remains unresolved. In fact, censorship in cyberspace is unconscionable and impossible. Trying to censor the Internet is problematic because the net is an international issue, there is no standard for judging materials, and censorship is an abridgment of the democratic spirit. Firstly, censorship on the Internet is an international issue. The Internet was begun by the U.S. military in the 1960s, but no one actually owns it. Thus, the Internet is a global network, and it crosses over different cultures. It is impossible to censor everything that seems to be offensive. For example, on June 4, 1996, Vietnam announced new regulations that forbid "data that can affect national security, social order, and safety or information that is not appropriate to the culture, morality, and traditional customs of the Vietnamese people."
It is also impossible to ban all the things that are prohibited in a given country. For instance, some countries, such as Germany, have considered taking measures against the U.S. and other companies or individuals that have created or distributed offensive material on the Internet. If the United States government really wanted to censor the net, there is only one solution: shut down all network links to other countries. But of course that would mean no Internet access for the whole country, and that would disgust the whole nation. Secondly, everyone has their own personal judgment values. The decision of some people cannot represent the whole population of those using the net. Many people argue that pornography on the net should be censored because there are kids online. However, we can see there are many kinds of pornographic magazines on display at newsstands. That is because we have regulations to limit who can read certain published materials. Likewise, some people already use special software to enforce age limits in cyberspace. Why do people still argue about that? It is all about personal points of view. Justice Douglas said, "To many the Song of Solomon is obscene. I do not think we, the judges, were ever given the constitutional power to make definitions of obscenity."[II] In cyberspace, it is hard to set up a pool of judges to censor what could be displayed on the net. Thirdly, censorship works against the democratic spirit; it opposes the right of free speech and is a breach of the First Amendment. Do you remember Salman Rushdie and his book The Satanic Verses? The Iranian government announced a death threat against Rushdie and his publishers because his book speaks against Islam. No one wants that to happen again. If you are an Internet user, you should have seen a blue ribbon logo. The blue ribbon symbolizes support for the essential human right of free speech. Let's think about what would happen if we lost the right of free speech. How could we stay online?
Who would give courage to the web's designers to put their opinions on the net? On the same day the 1996 Telecommunications Act was signed into law, a bill called House Bill 1630 was introduced by Georgia House of Representatives member Don Parsons. It is so repellent that this law even limits the right to choose email addresses[III]. "Freedom of speech on the Internet deserves the same protection as freedom of the press, freedom of speech, or freedom of assembly." said Bill Gates[IV]. In addition, information in cyberspace can change from second to second. If you put something on the web, everyone on the net can access it instantly. It is totally different from all traditional media. Everything on the Internet is just a combination of zeros and ones[V]. It is very difficult to chase down what has been published on the information superhighway. After President Clinton signed the 1996 Telecommunications Act, lots of net users reacted in outrage. Although the federal courts in Philadelphia and New York have overturned that Act, the government has appealed the ruling and the case has been referred to the U.S. Supreme Court. Since censorship is an international issue, people have different judgments, and censorship works against the democratic spirit, censorship of the Internet is totally unacceptable. In Justice Potter Stewart's words, "Censorship reflects a society's lack of confidence in itself. It is a hallmark of an authoritarian regime. Long ago those who wrote our First Amendment charted a different course. They believed a society can be truly strong only when it is truly free."[VI] If we allow those few in society to censor whatever they find offensive, we have forfeited our right of freedom and have lost our power as a democratic nation. I.) On Thursday, Feb. 1, 1996, Congress approved legislation to dramatically restrict the First Amendment rights of Internet users. President Clinton signed it into law on Thursday, Feb. 8, 1996. II.) Miller v. California, 413 U.S.
15, 46 (1973), Justice Douglas, dissenting opinion. III.) The bill makes it illegal for email users to have addresses that do not include their own names. IV.) Bill Gates, Microsoft Magazine, Volume 3, Issue 4, Page 54, TPD Publishing Inc., 1996. V.) The way in which computers read data. VI.) Ginzburg v. United States, 383 U.S. 463, 498 (1966) f:\12000 essays\technology & computers (295)\Clifford Stoll.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ By Clifford Stoll "The Cuckoo's Egg" is a story of persistence and love for one's work, and it is just plain funny! The story starts out with Clifford Stoll being "recycled" into a computer analyst/webmaster. Cliff, as he is affectionately called, is a long-haired ex-hippie who works at Lawrence Berkeley Lab. He originally was an astronomer, but since his grant ran out, he became a mainframe master. He was glad that instead of throwing him out into the unemployment office, the Lab recycled its people, and downstairs he went, to the computer lab. A few days after he becomes the master of the mainframe, his colleague, Wayne Graves, asks him to figure out a 75-cent glitch in the accounting system. It turns out that a computer guru, "Seventek," seems to be in town. None of his closest friends know that. The Lab becomes suspicious that it might be a hacker. To fill you in on who Seventek is: he is a computer guru who created a number of programs for the Berkeley UNIX system. At the time, he was in England, far from computers and civilization. The crew does not want to believe that it would be Seventek, so they start to look at what the impostor is doing. Cliff hooks up a few computers to the line that comes from the Tymnet. Tymnet is a series of fiber-optic cables that run from one major city to another.
So if you were in LA and wanted to hook up to a computer in the Big Apple, you could call long distance, have a lot of interference from other callers and a slow connection, or you could sign up with Tymnet, dial locally, hop on the optic cable and cruise on a T-3 line. The lab had only five Tymnet lines, so Cliff could easily monitor every one with five computers, teletypes, and printers. That was the difficult part: where to get all that equipment. At graduate school they taught Cliff to improvise. It was a Friday, and not many people come to work on Saturday. Since it was easier to make up an excuse than to beg for anything, he "borrowed" everything he needed. Then he programmed his computer to beep twice when someone logged on from the Tymnet lines. The thing is, since he was sleeping under his desk, he would gouge his head on the desk drawer. Also, many people like to check their e-mail very late at night, so as not to get interference. Because of that, his terminal beeped a lot! The next day, he was woken up by the cable operator. Cliff said that he must have smelled like a dying goat. Anyway, the hacker only logged on once during the night, but left an 80-foot souvenir behind. Cliff estimated two to three hours of roaming through the three-million-dollar pieces of silicon that he calls a computer. During that time the hacker planted a "cuckoo's egg." The cuckoo is a bird that leaves its eggs in other birds' nests. If it were not for the other species' ignorance, the cuckoo would die out. The same goes for the mainframe. There is a housecleaning program that runs every five minutes on Berkeley UNIX. It is called atrun. The hacker put his version of atrun into the computer through a hole in the Gnu-Emacs program, a program that lets the person sending e-mail put a file anywhere they wish. So that is how the hacker became a "superuser." A superuser has all the privileges of a system operator, but from a different computer.
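Cliff's beep-twice-on-login trick can be sketched in a few lines (a hypothetical modern rendition; Stoll actually wired up borrowed teletypes and printers, not a script like this):

```python
import sys
import time

def activity(lines, pattern="LOGIN"):
    """Yield each log line that signals a connection on a watched line."""
    for line in lines:
        if pattern in line:
            yield line.strip()

def watch(logfile, pattern="LOGIN"):
    """Follow a growing log file; ring the terminal bell twice per login."""
    with open(logfile) as f:
        f.seek(0, 2)                      # start at the current end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1)             # nothing new yet; wait and retry
                continue
            for hit in activity([line], pattern):
                sys.stdout.write("\a\a")  # beep twice
                print("tymnet activity:", hit)
```

The point is the same as in the book: the monitor only watches and alerts, so the intruder has no way to tell he is being observed.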
Cliff called the FBI, the CIA, and all the other three-lettered agencies that had spooks in trench coats and dark glasses (and some of them had those nifty earpieces too!). No one, not even the FBI, lifted a finger. The FBI listened, but stated that if he hadn't lost millions of dollars in equipment or classified data, they didn't want to know about it. The hodgepodge of information passing between the CIA, NSA, and Cliff began to worry his lover, Martha. A little background on her: she and Clifford have known each other since they were kids, and have been lovers since they became adults. They didn't feel like getting married because they thought that was a thing you do when you're very bland. They wanted freedom. If they ever wanted to leave, they would just pack their bags, pay their share of the utilities and hightail it out of there. Well, back to the plot. She too was an ex-hippie, and she hated anything that had to do with government. The spook calls were killing their relationship. When Cliff wanted to trace a phone call to the hacker, the police said, "That just isn't our bailiwick." It seemed that everyone wanted information and wanted Cliff to stay open with his monitoring system, but nobody seemed interested in paying for the things that were happening. When Cliff found the hacker in a supposedly secure system, he called the system administrator. The hacker was using the computer in their system to dial anywhere he wished, and they picked up the tab. The guy was NOT happy. He asked if he should close up shop on the hacker and change all the passwords. Cliff answered no; he wanted to track the guy/gal. First, Cliff strategically masterminded a contrivance. He would ask for the secure system's phone records, which would (theoretically) show him where the hacker was calling to. Then that night, Cliff became the hacker. He used his computer to log in to his account at Berkeley, and then he would Telnet to the hacked system, try the passwords and see what he could see.
Boy, was he ever surprised! He could call anywhere, for free!! He had access to other computers on the network also, some sensitive at that. The next day, Cliff called the system administrator and told him about his little excursion. The guy answered, "Sorry, Cliff, we have to close up shop. This went right up the line, and well, the modems are going down for a long time." This irritated Clifford. He was so close! Anyway, his life went back to semi-normal. (Was it ever?!) Then unexpectedly his beeper beeped. To fill you in, he had gotten himself a beeper for those unexpected pleasures. He was in the middle of making scrambled eggs for Martha, who was still asleep. He wrote her a note saying "The case is afoot!! :-)", leaving the eggs still in the pan. The hacker didn't come through the now-secure system, but through another line, over Tymnet. Cliff called Tymnet and got them to do a search. They traced the hacker over the "puddle" (the Atlantic) to the German Datex network. They couldn't trace any further because the Germans' network is all mechanical switches, unlike the computerized switches of the good ol' US of A! There would have to be a technician there, tracing the wire along the wall, into the ground, and maybe onto a telephone pole. Not only that, the Germans wouldn't do anything without a search warrant. Every minor discovery was told about six times to the different three-letter agencies that were on the case. Meanwhile, since this was no longer a domestic case, and was remotely interesting for the FBI, they took the case, out of pure boredom. The CIA affectionately called the FBI the "F entry". Now that the guys at the F entry were in, there was work to be done. They got a warrant, but the guy who was supposed to deliver it never did. This was beginning to be serious. Every time Cliff tried to get some info on what was going on across the puddle, the agencies clammed up. When the warrant finally came, the Germans let the technicians stay until midnight, German time.
As soon as the fiend on the other side raised his periscope, they would nail him. The problem was that to trace him, he needed to be on the line for about two hours! The kicker was that he was mostly on for two- to three-minute intervals. That is when Operation Showerhead came into effect!! Martha came up with this plan while in the shower with Cliff... First, make up some cheesy files that sound remotely interesting. Then place them in a spot that only he and the hacker could read. Recall that the hacker was after military files. They took files that were already there, changed every Mr. to General, every Ms. to Corporal, and every Professor to Sergeant Major. All that day they made up those files. Then they pondered what the title should be: STING or SDINET. They chose SDINET because STING looked too obvious. Then they created a bogus secretary, under the address of a real one. Cliff put enough files in the directory that it would take the hacker at least three hours to dump the whole thing onto his computer. One of the files said that if you wanted more info, write to this address. Well, one day Cliff was actually doing some work, for a change, when the real secretary called to say that a letter had come for the bogus secretary. Cliff ran up the stairs; the elevator was too slow. They opened it, and she read it aloud to Cliff, who was in utter amazement. Then he called the F entry. They told him not to touch the document and to send it to them in a special envelope. He did. Cliff was at home one day when all of a sudden his beeper beeped. Since he had programmed it to beep in Morse code, he knew where the hacker was coming from before he physically saw him on the screen. Martha groaned while Clifford jumped on his old ten-speed and rode to work. When he got there, the hacker had just started to download the SDINET files from the UNIX machine. He called Tymnet and started the ball rolling. 
That day the hacker was on for more than two hours, enough for the trace to be completed. Though he knew that the FBI knew the number, they wouldn't tell him who the predator was. For the next few days, Clifford expected to get a call from the Germans saying, "You can close up your system, we have him at the police station now." That didn't happen. He got word, though, that there had been a search of the hacker's home, and they recovered printouts, computer back-up tapes, disks, and diskettes. That was enough evidence to lock him up for a few years. Then one day, they caught him in the act. That was enough; he was in the slammer awaiting trial. Clifford's adventure was over; he had caught his hacker, and was engaged to Martha. They decided to get married after all. He returned to being an astronomer, and not a computer wizard. Though many people thought of him as a wizard, he himself thought that what he did was a discovery he had stumbled on. From a 75-cent accounting mishap, to Tymnet, to Virginia, to Germany. What a trace! At the end of the story, poor Cliff was sobbing because he had grown up!! To him that was a disaster, but with the wedding coming up, and his life officially beginning, he soon forgot it. Now he lives in Cambridge with his wife, Martha, and three cats that he pretends to dislike. 
f:\12000 essays\technology & computers (295)\CMIP vs SNMP Network Management Protocols.TXT 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
CMIP vs. SNMP : Network Management 
Imagine yourself as a network administrator, responsible for a 2,000-user network. This network reaches from California to New York, with some branches overseas. In this situation anything can, and usually does, go wrong, and it would be your job as a system administrator to resolve each problem as quickly as possible when it arises. The last thing you would want is for your boss to call you up, asking why you haven't done anything to fix the two major systems that have been down for several hours. 
How do you explain to him that you didn't even know about it? Would you even want to tell him that? So now picture yourself in the same situation, only this time you are using a network monitoring program. You sit in front of a large screen displaying a map of the world, leaning back gently in your chair. A gentle warning tone sounds, and looking at your display, you see that California is now glowing a soft red, in place of the green glow of just moments before. You select the state of California, and it zooms in for a closer look. You see a network diagram overview of all the computers your company has within California. Two systems are flashing, with an X on top of them indicating that they are experiencing problems. Tagging the two systems, you press enter, and with a flash the screen displays all the statistics of the two systems, including anything they have in common that might be causing the problem. Seeing that both systems are linked to the same card of a network switch, you pick up the phone and give that branch office a call, notifying them not only that they have a problem, but how to fix it as well. Early in the days of computers, a central computer (called a mainframe) was connected to a group of dumb terminals using standard copper wire. Not much thought was put into how this was done, because there was only one way to do it: the terminals were either connected, or they weren't. Figure 1 shows a diagram of these early systems. If something went wrong with this type of system, it was fairly easy to troubleshoot: the blame almost always fell on the mainframe. Shortly after the introduction of the Personal Computer (PC) came Local Area Networks (LANs), forever changing the way in which we look at networked systems. 
LANs originally consisted of just PCs connected into groups, but soon there came a need to connect those individual LANs together, forming what is known as a Wide Area Network, or WAN. The result was a complex web of computers joined together using various types of interfaces and protocols. Figure 2 shows a modern-day WAN. Last year, a survey of Fortune 500 companies showed that 15% of their total computer budget, $1.6 million, was spent on network management (Rose, 115). Because of this, much attention has focused on two families of network management protocols: the Simple Network Management Protocol (SNMP), which comes from a de facto standards-based background of TCP/IP communication, and the Common Management Information Protocol (CMIP), which derives from a de jure standards-based background associated with the Open Systems Interconnection (OSI) model (Fisher, 183). In this report I will cover the advantages and disadvantages of both the Common Management Information Protocol (CMIP) and the Simple Network Management Protocol (SNMP), as well as discuss a new protocol for the future. I will also give some good reasons supporting why I believe that SNMP is a protocol that all network administrators should use. SNMP is a protocol that enables a management station to configure, monitor, and receive trap (alarm) messages from network devices (Feit, 12). It is formally specified in a series of related Request for Comments (RFC) documents, listed here: 
RFC 1089 - SNMP over Ethernet 
RFC 1140 - IAB Official Protocol Standards 
RFC 1147 - Tools for Monitoring and Debugging TCP/IP Internets and Interconnected Devices [superseded by RFC 1470] 
RFC 1155 - Structure and Identification of Management Information for TCP/IP-based Internets 
RFC 1156 - Management Information Base for Network Management of TCP/IP-based Internets 
RFC 1157 - A Simple Network Management Protocol 
RFC 1158 - Management Information Base for Network Management of TCP/IP-based Internets: MIB-II 
RFC 1161 - SNMP over OSI 
RFC 1212 - Concise MIB Definitions 
RFC 1213 - Management Information Base for Network Management of TCP/IP-based Internets: MIB-II 
RFC 1215 - A Convention for Defining Traps for Use with the SNMP 
RFC 1298 - SNMP over IPX 
(SNMP, Part 1 of 2, I.1.) 
The first protocol developed was the Simple Network Management Protocol (SNMP). It was commonly considered to be a quickly designed "band-aid" solution to internetwork management difficulties, devised while other, larger and better protocols were being designed (Miller, 46). However, no better choice became available, and SNMP soon became the network management protocol of choice. It works very simply (as the name suggests): it exchanges network packets through messages known as protocol data units (PDUs). A PDU contains variables that have both titles and values. There are five types of PDUs which SNMP uses to monitor a network: two deal with reading terminal data, two with setting terminal data, and one, called the trap, is used for monitoring network events such as terminal start-ups or shut-downs. By far the largest advantage of SNMP over CMIP is its simple design: it is as easy to use on a small network as on a large one, it is easy to set up, and it places little stress on system resources. The simple design also makes it simple for users to program the system variables they would like to monitor. Another major advantage of SNMP is that it is in wide use today around the world. Because it was developed at a time when no other protocol of this type existed, it became very popular, and it is a built-in protocol supported by most major vendors of networking hardware, such as hubs, bridges, and routers, as well as by major operating systems. 
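As a minimal sketch of the five PDU types just described, the following toy Python program names them as RFC 1157 does (GetRequest, GetNextRequest, GetResponse, SetRequest, Trap) and classifies each by the management action it performs; the numeric tags match the standard, but the `describe` helper is invented for illustration:

```python
from enum import Enum

# The five SNMPv1 PDU types (names and tag numbers per RFC 1157).
class PduType(Enum):
    GET_REQUEST = 0       # read a variable from an agent
    GET_NEXT_REQUEST = 1  # read the next variable (used to walk tables)
    GET_RESPONSE = 2      # agent's reply carrying the requested values
    SET_REQUEST = 3       # write a variable on an agent
    TRAP = 4              # unsolicited event report (e.g. a device restart)

def describe(pdu: PduType) -> str:
    """Classify a PDU by the kind of management action it performs."""
    if pdu in {PduType.GET_REQUEST, PduType.GET_NEXT_REQUEST}:
        return "read"
    if pdu is PduType.SET_REQUEST:
        return "write"
    if pdu is PduType.GET_RESPONSE:
        return "response"
    return "event"

print(describe(PduType.TRAP))  # event
```

Note that the trap is the only PDU an agent sends without being asked; everything else is request/response driven by the management station.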
It has even been put to use inside the Coca-Cola machines at Stanford University in Palo Alto, California (Borsook, 48). Because of SNMP's small size, it has even been implemented in such devices as toasters, compact disc players, and battery-operated barking dogs. At the 1990 Interop show, John Romkey, vice president of engineering for Epilogue, demonstrated that through an SNMP program running on a PC, you could control a standard toaster over a network (Miller, 57). SNMP is by no means a perfect network manager, but because of its simple design, its flaws can be fixed. The first problem realized by most companies is that there are some rather large security problems with SNMP. Any decent hacker can easily access SNMP information, giving him any information about the network, and also the ability to potentially shut down systems on it. The latest version of SNMP, called SNMPv2, has added some security measures that were left out of SNMP, to combat the three largest problems plaguing it: privacy of data (to prevent intruders from gaining access to information carried along the network), authentication (to prevent intruders from sending false data across the network), and access control (which restricts access to particular variables to certain users, thus removing the possibility of a user accidentally crashing the network) (Stallings, 213). The largest problem with SNMP, ironically enough, is the same thing that made it great: its simple design. Because it is so simple, the information it deals with is neither detailed nor well organized enough to deal with the growing networks of the 1990's. This is mainly due to the quick creation of SNMP; it was never designed to be the network management protocol of the 1990's. Like the previous flaw, this one too has been corrected in the new version, SNMPv2. 
This new version allows for more detailed specification of variables, including the use of the table data structure for easier data retrieval. Also added are two new PDUs that are used to manipulate the tabled objects. In fact, so many new features have been added that the formal specification of SNMP has expanded from 36 pages (with v1) to 416 pages with SNMPv2 (Stallings, 153). Some people might say that SNMPv2 has lost its simplicity, but the truth is that the changes were necessary and could not have been avoided. A management station relies on the agent at a device to retrieve or update the information at the device. The information is viewed as a logical database, called a Management Information Base, or MIB. MIB modules describe MIB variables for a large variety of device types, computer hardware, and software components. The original MIB for managing a TCP/IP internet (now called MIB-I) was defined in RFC 1066 in August of 1988. It was updated in RFC 1156 in May of 1990. The MIB-II version, published in RFC 1213 in May of 1991, contained some improvements, and has proved that it can do a good job of meeting basic TCP/IP management needs. MIB-II added many useful variables missing from MIB-I (Feit, 85). MIB variables are common variables used not only by SNMP, but by CMIP as well. In the late 1980's a project began, funded by governments and large corporations: the Common Management Information Protocol (CMIP) was born. Many thought that because of its nearly unlimited development budget, it would quickly come into widespread use and overthrow SNMP from its throne. Unfortunately, problems with its implementation have delayed its use, and it is now only available in limited form from developers themselves (SNMP, Part 2 of 2, III.40.). CMIP was designed to be better than SNMP in every way: to repair all of SNMP's flaws and expand on what was good about it, making it a bigger and more detailed network manager. 
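The "logical database" idea behind the MIB can be illustrated with a toy sketch (this is not a real SNMP library; the OIDs below are genuine MIB-II object identifiers, but the stored values and the two helper functions are invented for illustration):

```python
# Toy model of a MIB: a logical database mapping dotted OIDs to
# named variables, as in MIB-II. Values here are made up.
mib = {
    "1.3.6.1.2.1.1.1": ("sysDescr", "example router"),
    "1.3.6.1.2.1.1.3": ("sysUpTime", 4711),
    "1.3.6.1.2.1.2.1": ("ifNumber", 4),
}

def oid_key(oid):
    """Compare OIDs component-wise, not as strings ('10' sorts after '2')."""
    return tuple(int(part) for part in oid.split("."))

def get(oid):
    """What a GetRequest does: fetch one variable by exact OID."""
    return mib[oid]

def get_next(oid):
    """What a GetNextRequest does: return the next OID in order, which
    lets a manager walk the whole MIB without knowing its contents."""
    later = sorted((k for k in mib if oid_key(k) > oid_key(oid)), key=oid_key)
    return later[0] if later else None

print(get_next("1.3.6.1.2.1.1.1"))  # 1.3.6.1.2.1.1.3
```

The `get_next` operation is what makes table retrieval possible in SNMPv1: the manager repeatedly asks for "whatever comes after this OID" until the agent runs out of entries.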
Its design is similar to SNMP's, in that PDUs are used as variables to monitor the network. CMIP, however, contains 11 types of PDUs (compared to SNMP's 5). In CMIP, the variables are seen as very complex and sophisticated data structures with three kinds of attributes: 
1) Variable attributes, which represent the variable's characteristics (its data type, whether it is writable); 
2) Variable behaviors, the actions of that variable that can be triggered; 
3) Notifications: the variable generates an event report whenever a specified event occurs (e.g. a terminal shutdown would cause a variable notification event) (Comer, 82). 
As a comparison, SNMP only employs variable properties from items one and three above. The biggest feature of the CMIP protocol is that its variables not only relay information to and from the terminal (as in SNMP), but can also be used to perform tasks that would be impossible under SNMP. For instance, if a terminal on a network cannot reach the file server a predetermined number of times, then CMIP can notify the appropriate personnel of the event. With SNMP, however, a user would have to specifically tell it to keep track of unsuccessful attempts to reach the server, and then what to do when that variable reaches a limit. CMIP therefore results in a more efficient management system, with less work required from the user to keep updated on the status of the network. CMIP also contains the security measures left out of SNMP. Because of the large development budget, when it becomes available CMIP will be widely used by the government and by the corporations that funded it. After reading the above, you might wonder why, if CMIP is this wonderful, it is not being used already (after all, it has been in development for nearly ten years). The answer is CMIP's one major disadvantage, which is enough, in my opinion, to render it useless: CMIP requires about ten times the system resources that are needed for SNMP. 
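The file-server example above (attributes, a triggerable behavior, and a notification that fires on a specified event) can be sketched as a toy Python class; this is an invented illustration of the CMIP variable model, not CMIP itself, and all names and the threshold are hypothetical:

```python
# Hypothetical sketch of a CMIP-style managed variable: it carries
# attributes, a behavior the agent can trigger, and a notification
# that fires automatically when a specified event occurs.
class ManagedVariable:
    def __init__(self, name, threshold, notify):
        self.name = name           # attribute: the variable's identity
        self.failures = 0          # attribute: its current value
        self.threshold = threshold # attribute: when to raise the event
        self.notify = notify       # notification hook: event report sink

    def record_failure(self):
        """Behavior: triggered by the agent on each failed attempt to
        reach the file server; emits an event report at the threshold."""
        self.failures += 1
        if self.failures >= self.threshold:
            self.notify(f"{self.name}: {self.failures} failed attempts")

alerts = []
var = ManagedVariable("fileserver-reach", threshold=3, notify=alerts.append)
for _ in range(3):
    var.record_failure()
print(alerts)  # ['fileserver-reach: 3 failed attempts']
```

Under plain SNMP the manager would have to poll the failure count itself and decide when to react; here the variable carries that logic, which is the efficiency argument the paragraph above makes.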
In other words, very few systems in the world would be able to handle a full implementation of CMIP without undergoing massive network modifications. This disadvantage has no inexpensive fix. For that reason, many believe CMIP is doomed to fail. The other flaw in CMIP is that it is very difficult to program; its complex nature requires so many different variables that only a few skilled programmers are able to use it to its full potential. Considering the above information, one can see that both management systems have their advantages and disadvantages. However, the deciding factor between the two lies with their implementation: for now, it is almost impossible to find a system with the necessary resources to support the CMIP model, even though it is superior to SNMP (v1 and v2) in both design and operation. Many people believe that the growing power of modern systems will soon fit well with the CMIP model, and might result in its widespread use, but I believe that by the time that day comes, SNMP could very well have adapted itself to offer what CMIP currently offers, and more. As we've seen with other products, once a technology achieves critical mass and a substantial installed base, it's quite difficult to convince users to rip it out and start fresh with a new and unproven technology (Borsook, 48). It is therefore recommended that SNMP be used in situations where minimal security is needed, and SNMPv2 where security is a high priority. 
Works Cited 
Borsook, Paulina. "SNMP tools evolving to meet critical LAN needs." Infoworld June 1, 1992: 48-49. 
Comer, Douglas E. Internetworking with TCP/IP. New York: Prentice-Hall, Inc., 1991. 
Dryden, Patrick. "Another view for SNMP." Computerworld December 11, 1995: 12. 
Feit, Dr. Sidnie. SNMP. New York: McGraw-Hill Inc., 1995. 
Fisher, Sharon. "Dueling Protocols." Byte March 1991: 183-190. 
Horwitt, Elisabeth. "SNMP holds steady as network standard." Computerworld June 1, 1992: 53-54. 
Leon, Mark. 
"Advent creates Java tools for SNMP apps." Infoworld March 25, 1996: 8. 
Miller, Mark A., P.E. Managing Internetworks with SNMP. New York: M&T Books, 1993. 
Moore, Steve. "Committee takes another look at SNMP." Computerworld January 16, 1995: 58. 
Moore, Steve. "Users weigh benefits of DMI, SNMP." Computerworld July 31, 1995: 60. 
Rose, Marshall. The Simple Book. New Jersey: Prentice Hall, 1994. 
The SNMP Workshop & Panther Digital Corporation. SNMP FAQ Part 1 of 2. Danbury, CT: http://www.www.cis.ohio-state.edu/hypertext/faq/usenet/snmp-faq/part1/faq.html, pantherdig@delphi.com. 
The SNMP Workshop & Panther Digital Corporation. SNMP FAQ Part 2 of 2. Danbury, CT: http://www.www.cis.ohio-state.edu/hypertext/faq/usenet/snmp-faq/part2/faq.html, pantherdig@delphi.com. 
Stallings, William. SNMP, SNMPv2, and CMIP. Don Mills: Addison-Wesley, 1993. 
Vallillee, Tyler, web page author. http://www.undergrad.math.uwaterloo.ca/~tkvallil/snmp.html 
VanderSluis, Kurt. "SNMP: Not so simple." MacUser October 1992: 237-240. 
f:\12000 essays\technology & computers (295)\Cognitive Artifacts and Windows 95.TXT 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
The article on Cognitive Artifacts by Donald A. Norman deals with the theories and principles of artifacts as they relate to the user during the execution and completion of tasks. The principles and theories that Norman discusses may be applied to any graphical user interface; however, I have chosen to relate the article to the interface known as Windows 95. Within Windows 95, Microsoft has included a little tool called the wizard that guides us through the steps involved in setting up certain applications. This wizard is a very helpful tool for the inexperienced computer user, in that it acts like a to-do list. The wizard takes a complex task and breaks it into discrete pieces, asking questions and responding based on the answers. 
Using Norman's theories of the system view and the personal view of artifacts, we see that from the system view the wizard is an enhancement. For example, if you want to set up Internet Explorer, you click on the icon, answer the wizard's questions, and the computer performs the work, making sure everything is set up properly, without the errors that could occur if you configured the task yourself. The wizard performs all the functions on its little to-do list without the user having to worry about whether he or she remembered to include all the commands. On the side of the personal view, the user may see the wizard as a new task to learn, but in general it is simpler than configuring the application yourself and making an error that could cause disaster to your system. The wizard also prevents the user from having to deal with the internal representation of the application, like typing command lines in the system editor. Within Windows 95 most of the representation is internal; therefore we need a way to transform it into a surface representation so it is accessible to the user. According to Norman's article, there are "three essential ingredients in representational systems": the world which is to be represented, the set of symbols representing the world, and an interpreter. This is done in Windows by icons on the desktop and on the Start menu. The world we are trying to represent to the user is the application, which can be represented by a symbol: the icon. The icons on the desktop and on the Start menu are the surface representations the user sees when he goes to access the application, not all the files used to create it or used in conjunction with the application's operation. With the icons, a user can retrieve applications and their files with the click of a button. The icons lead the user directly into the application without showing all the commands the computer goes through to open it. 
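The wizard's to-do-list behavior described above can be sketched as a toy program: a complex setup task broken into discrete question/answer steps, so the user never sees the underlying configuration commands. The step names and answers here are entirely invented for illustration:

```python
# Toy sketch of a setup wizard: each step is one question, and the
# wizard applies each answer itself, hiding the internal commands.
steps = [
    ("Connection type (modem/lan)?", "connection"),
    ("Phone number or proxy?", "endpoint"),
    ("User name?", "username"),
]

def run_wizard(answers):
    """Walk the to-do list, collecting one answer per step and
    building the configuration the user would otherwise type by hand."""
    config = {}
    for (question, key), answer in zip(steps, answers):
        config[key] = answer  # the wizard records and applies the setting
    return config

print(run_wizard(["modem", "555-0100", "alice"]))
# {'connection': 'modem', 'endpoint': '555-0100', 'username': 'alice'}
```

The point of the pattern is exactly what the essay claims: the user supplies answers, and the wizard, not the user, is responsible for remembering and executing every item on the list.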
The icons make the user more efficient in accomplishing tasks, because they cut down on the time spent trying to find an item when the user can relate what he or she wants to do to the symbol on the icon. Another example of an artifact within Windows 95 that exhibits Norman's theories is the recycle bin. This requires the user to engage directly with the Windows Explorer and to know the right item to delete. When a user decides that he no longer desires a certain program and chooses to delete the item, he is executing a command that will change the state of the system. By selecting the item to delete, the user has started an activity flow which involves the gulf of evaluation and the gulf of execution. Either of these gulfs could be perceived differently by the user than by the system, so Windows 95 prompts the user with a dialog box asking if the user is sure he or she wants to remove this item from the system, and it prompts again when emptying the recycle bin. What the user intends to do and what the system plans to do might not be the same, so by prompting the user for action we are double-checking that this is what the user has in mind. However, when Windows prompts us with the confirmation message, we are breaking the scheduled activity flow. The main problem with halting the activity flow is that it breaks the user's attention; however, when deleting an item you could have selected the wrong item by mistake, and without the break in activity flow the outcome could be dangerous. Norman calls these breaks "forcing functions which prevent critical or dangerous actions from happening without conscious attention." The artifacts discussed above in the Windows 95 graphical user interface correspond closely to the theories and principles that Norman suggests in his article. Norman has stressed that a cognitive artifact should follow three aspects, which I feel Windows has dealt with. 
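The forcing function just described can be sketched in a few lines: a destructive action is interrupted by a confirmation step, and a deleted item goes to the recycle bin rather than being destroyed outright. This is an invented illustration of the pattern, not Windows code; the file names are hypothetical:

```python
# Sketch of a forcing function as Windows 95 applies it: the delete
# action is halted for confirmation, and the item stays recoverable
# in the recycle bin until the bin itself is emptied.
def delete(item, files, recycle_bin, confirm):
    """Move item to the recycle bin only if the user confirms."""
    if not confirm(f"Are you sure you want to delete '{item}'?"):
        return False             # activity flow deliberately halted
    files.remove(item)
    recycle_bin.append(item)     # recoverable, not destroyed
    return True

files, recycle_bin = ["report.doc", "notes.txt"], []
delete("notes.txt", files, recycle_bin, confirm=lambda msg: True)
print(files, recycle_bin)  # ['report.doc'] ['notes.txt']
```

The two-stage design (confirm on delete, confirm again on emptying the bin) is what makes the interruption worth the broken attention: a wrong selection costs one extra click instead of a lost file.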
Windows 95 has been made adaptable to the user, whether he or she is an experienced user or not, by providing artifacts like icons and menu bars that are all related to one another. This makes it easier for the user to adapt to the environment and continue computing happily. 
f:\12000 essays\technology & computers (295)\Communication over the net.TXT 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
Communication over the internet, and the effects it will have on our economy. 
Thesis: Communication over the internet is growing at a rapid rate; this rate of growth may destroy the monopoly held by the major telecommunication giants. 
In this day and age, we as a global community are growing at a very fast rate. Communication is a vital tool which aids us in breaking the distance barrier. Over the past decades there has been a monopoly in the telecommunications business, but now, with the power of the internet and super-fast data transfer rates, people can communicate across the globe and pay only local rates. 
· In essence, the local phone companies almost promote this. 
- When you log on to the internet, chances are that you are logging on through a local internet provider. You use your computer's modem to dial up and create a data link with your net provider. Where does the net provider get his super-fast net connection? He gets the connection from the local phone company. 
· How logging on to the internet is almost like logging right onto the local telephone company. 
- It all boils down to the local phone company approving the use of the internet for any means. 
· How phone companies are going to bring themselves down. 
- I feel that because of this, phone companies will be the cause of their own downfall. 
· Methods of communication over the net 
- There are many ways of communicating over the net: Internet Relay Chat (text only). 
- Video/Audio: there are many net applications which allow the user to simply plug in a mini video camera (which can be purchased from $150 and up), speakers, and a microphone, establish a net connection, and be able to see and hear the other party. 
- There are also applications such as the internet phone, which enables the user to talk with other people; this works almost like a conventional telephone. 
· New technologies and what to expect in the near future. 
- There have been many new breakthroughs in communications recently; we are unfolding new ideas and new and faster ways of communicating. Fiber-optic technology is probably the next major wave. Fiber-optic communication over the internet will mean that it will be a lot easier to communicate. 
· Why there is no jurisdiction over this means of communication. 
- A major principle of law and order is control over a certain area and population. Laws that apply to one state or province don't necessarily apply to another: in Amsterdam you can order a slice of hashish with your coffee, while if you did the same in Singapore you would be executed. The internet does not reside anywhere, nor is it a physical thing. The internet has no boundaries; there is no way in which we can control it. There is no one person liable for what happens on it, and there is no board of control; therefore nobody has any jurisdiction over what happens on the internet. This should be a major concern to large telecommunication companies. 
· Advantages/Disadvantages of the technology available to the normal person 
- There is a downside, however, to communication on the internet: for example, when talking on the internet phone, you cannot talk both ways at once; one person says something, the other waits until he or she is done, and then the other person can respond. 
On the other hand, if you have a problem with cutting people off, then this would be a good solution for you. 
· How corporations other than the telecommunication companies will boom 
- These new technologies will dramatically lower the cost of communication, not to mention the advantages of online service. For example, it is quite easy for a technician to log on to your machine and fix any problems which may occur. 
· How the government will gain/lose from this new technology. 
- The government will gain money from the people as a whole, because people reduce their costs, enabling themselves to purchase desired goods which are taxable. 
· The aftershocks of the effects on the economy, i.e. decreased unemployment. 
- People might argue that there will be major job cuts due to new technologies, but what do the telephone companies plan to do anyway in the next five to ten years? They are all looking at technology as well to reduce their costs, costs such as manual labour. The new technology will also create jobs for graduating students from universities; there will be a large demand for programming skills, computer-oriented network managers, system operators, etc. 
Technology is a tool with which we will improve our quality of life; it will aid us in making life easier so that we can enjoy it to the fullest. Communication over the internet will help a lot, from sending faxes, to chatting with someone from Australia, to video conferencing with a boss. Communication over the internet will affect our economy. 
f:\12000 essays\technology & computers (295)\Communications Decency Act.TXT 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
The Communications Decency Act (The Fight For Freedom of Speech on the Internet) 
The Communications Decency Act is a bill which has insulted our rights as American citizens. It is a bill which SHOULD not pass. I'll share with you how Internet users are reacting to this bill, and why they say it is unconstitutional. 
Some individuals disagree with one part of the bill. According to http://thomas.loc.gov/cgi-bin/query/z?c104:s.652.enr:, which has the Communications Decency Act on-line for public viewing: "Whoever uses an Internet service to send to a person or persons under 18 years of age......any comment, request, suggestion, proposal, image,........or anything offensive as measured by contemporary community standards, sexual or excretory activities or organs.....shall be fined $250,000 if the person(s) is/are under 18....... imprisoned not more than two years.......or both." The wording of that section seems sensible. However, if this one little paragraph is approved, many sites, such as the Venus de Milo site at http://www.paris.org/Musees/Louvre/Treasures/gifs/venusdemilo.gif, the Sistine Chapel at http://www.oir.ucf.edu/wm/paint/auth/michelangelo/michelangelo.creation, and Michelangelo's David at http://fileroom.aaup.uic.edu/FileRoom/images/image201.gif, could not be accessed and used by anybody under the age of 18. These works of art and many other museum pictures would not be available, because the bill says these sites show indecent pictures. The next part of the CDA has everybody in a big legal fit. We, concerned Internet users, took the writers of this bill to court, and we won. This part of the bill states: "Whoever....makes, creates, or solicits...........any comment, request, suggestion, proposal, image, or other communication which is obscene, lewd, lascivious, filthy, or indecent.......with intent to annoy, abuse, threaten, or harass another person......by means of an Internet page..........shall be fined $250,000 under title 18......imprisoned not more than two years....or both......" The writers of that paragraph of the bill forgot something: it violates the Constitution. 
The First Amendment states: "Congress shall make no law....prohibiting or abridging the freedom of speech......the right of the people peaceably to assemble.....and to petition the Government.............." This bill does exactly that. It says we cannot express our feelings cleanly. I understand that what may be of interest to me may be offensive to others. Many people put up warning signs on their websites stating, "This site may contain offensive material. If you are easily offended you may not want to come here." If the writers of this bill had listed that as a requirement, there would have been no trouble. Here is the way I look at it. I think that some things should be censored on the Internet. Child pornography, for instance, is already illegal, so it follows that it should also be illegal on the Internet. Besides, psychologically, it damages the children involved. Something else that should be banned from the Internet are "hacker" programs meant to harm other Internet users. One example of such a program is AOHell, which can give you free access to America On-line, let you "e-mail bomb" other users, or otherwise harass others using the service. (America On-line recently gave itself the right to let users have their mail scanned for such harmful things.) Another thing that could be banned are text files which describe how to carry out illegal actions, such as making bombs. The most famous is the "Anarchist Cook Book," which shows Internet users some of the above techniques. I also believe that the use of log-ins, passwords, and rating systems on Internet pages is a good idea, and not a violation of our civil rights. They simply allow users to choose what they want to see. Some of these systems are already in use today, along with programs that watch for obscene and profane keywords, and links to pornographic sites. What have Internet users learned from the courts? 
After all was said and done, we have learned that passing unconstitutional laws like the CDA is not the exception but the rule these days in Washington, DC. Next, the people responsible for giving us the CDA are respectable Republicans and Democrats, not liberals and conservatives. If someone would have asked an Internet user who is opposed to the CDA to vote for Clinton or Dole this past fall, they would say, "Wouldn't that have been like being given a choice between cancer and heart disease?" In other words, disrespect for the President and Congress seem appropriate. Third, the White House recognizes that it is cheaper to pass this bill, by saying, this is the law. Live with it. Doing this would prove to me this country is run by politicians who do not care about the people, their rights, or the law. This bill, if passed, would only prove to me that all the government cares about is themselves and their money. A great president by the name of Abraham Lincoln once said, "This country was made for the people, and run by the people..." America can now only hope, for another man like Lincoln, to step up, and lead this country, bringing it back to what it used to be. Also, it is time to focus on the things we need to have in this country, like building a new society. After World War II and Vietnam, I believe it is the computer generation's destiny to rebuild our family and give community abilities to evolve, solve problems, generate and distribute wealth, promote peace, and personal security. Finally, freedom is struggle, by definition. Freedom on the Internet is not a gift. It's the space we ourselves own, in the face of the government and the media, who have seemingly tried to take that space away from us. CDA will also take away some sites such as: The Library of Congress Card Catalog, which some say contains "indecent" language. 
We will not be able to view such literature as Mark Twain's The Adventures of Huckleberry Finn and Nathaniel Hawthorne's The Scarlet Letter, because the CDA says those "classics" contain offensive material. The act also prevents any sites in existence which tell teens about safe sex and Sexually Transmitted Diseases. Most on-line newspapers such as USA TODAY, will have to be blackened out when the monitor's screen shows them articles about sex. "Ignorance is caused by stupidity!" That has become a familiar "battle" cry of Internet users. The goverment knows hardly nothing about the pride Internet users take in having their own "world." That is the stupidity part of it. The ingnorance is the politicians refusing to listen to us. They do not want to understand. Some ways you can help fight this terrible bill would be to march through Washington, DC on July 30, 1997. Many people have turned their web pages backgrounds black to show they are protesting. Some display blue ribbons to show an Internet users' displeasure with the CDA. Another way to show you care is to e-mail high political officers. I have e-mailed the current president (9:23 PM, 11-5-96) Bill Clinton and the vice-president Al Gore. I have also mailed Bob Dole and Jack Kemp. On the more local level I have e-mailed Senators: Rick Santorum and Arlen Specter and Representatives: Jon Fox, Paul Kanjorski, Paul McHale, John Murtha, Robert Walker, and Curt Weldon. I have mailed: Gov. Tom Ridge, Lt. Gov. Mark Schweiker and Senators Roy Afflerbach, Gibson Armstrong, Clarence Bell, David Brightbill, J. Doyle Corman, Daniel Delp, Vincent Fumo, Jim Gerlach, Stewart Greenleaf, Melissa Hart, F. Joseph Loeper, Roger Madigan, Robert Mellow, Harold Mowery Jr., John Peterson, James Rhoades, Robert Robbins, Allyson Schwartz, Joseph Uliana, Noah Wenger, Rep. Lisa Boscola, Rep. Italo Cappabianca and Rep. Lawrence Curry have been contacted by myself as well. I have e-mailed Happy Fernandez, a Philadelphia City Councilwoman. 
The message I sent them is a smaller version of this one: "To whom it may concern, I am writing to you about the Communications Decency Act. I believe the act is unconstitutional. Amendment I states: "Congress shall make no law......abridging the freedom of speech...." This alone should prohibit this act. The Communications Decency Act will force many educational Internet sites to close. I, as a student, use the Information Super Highway for exactly that, information. It is very helpful to have updated facts and so forth. With the Communications Decency Act such sites as the Library of Congress Electronical Card Catalog would be kept away from me because of "indecent" titles. I use the word indecent in quotation marks because I feel it is being used improperly. Some other sites, will be closed because of nudity. Such sites as Michelangelo's David, because of the "nudity." There again I use quotations. Sites informing teenagers such as myself of the dangers of Unprotected Sex and AIDS, as well as other STD's will not be allowed to be shown. I know I may be taking this the wrong way, so I would appreciate response telling me why this act should pass. I hope you consider what I, and many others, have been saying. Thank you for your time, Ryne Crabb " Another huge part of this world-wide protest was the Electronic March on Washington, DC. People, of all ages, who care about the unconstitutionality of the CDA, went to the White House and made signs, etc. while marching around the White House's property. Also, everybody was asked to e-mail the president in protest. President Clinton got over 10,000 e-mail messages on that day. I think it opened a lot of eyes. Black Thursday was another big issue. Over 82% of the Internet's websites had a "blackout." "Yahoo!" the famous search engine also blackened all of their pages in protest. It was beautiful how many heads were turned. 
Major businesses such as AT&T and ESPN also did their part in this battle by making comments about it to less informed Internet users. Although there are other things happening in cyberspace, this issue remains a major problem. Chances are, however, when this piece of legal mess is settled, happily or not, another will come up. I can almost see what is next on the list. Some countries are taxing the Internet. Trust me, we do not even WANT to get into that, yet. I hope this opened your eyes as to the importance of this fight. We need to show the government this country still is made for the people, and run by the people. That is written in the constitution. We do not want to change the document our forefathers wrote expressing their wishes for our future generations. That document protects our freedoms. It is important that the constitution remains intact so that it can preserve all of our freedoms including use of the Internet as we see fit. f:\12000 essays\technology & computers (295)\Compaq Corporation.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Introduction The intention of this project is to demonstrate the function of production planning in a non - artificial environment. Through this simulation we are able to forecast, with a degree of certainty the monthly requirements for end products, subassemblies, parts and raw materials. We are supplied with information that we are to base our decisions on. The manufacturer depicted in this simulation was actually a General Electric facility that produced black and white television sets Syracuse, New York. Unfortunately this plant is no longer operational, it was closed down and the equipment was shipped off to China. One can only wonder if the plant manager would have taken Professor Moily's class in production management the plant still might be running. Modern production management or operation management (OM) systems first came to prominence in the early half of the twentieth century. 
Frederick W. Taylor is considered the father of operations management and is credited in the development of the following principles. a. Scientific laws govern how much a worker can produce in a day. b. It is the function of management to discover and use these laws in operation of productive systems. c. It is the function of the worker to carry out management's wishes without question. Many of today's method's of operation management have elements of the above stated principles. For example, part of Material Requirement Planning system (MRP) is learning how workers to hire, fire, or lay idle. This is because it we realize the a worker can only produce so many widgets a day, can work so many hours a day, and so many days a year. I will disagree with principle "c" in that the worker should blindly carry out the wishes of management. Successful operations are based upon a two-way flow of thought and suggestions from management to labor. This two-way flow of ideas is incorporated into another modern system of operations management, the Just - In - Time system. Eastman Kodak gives monetary rewards to employees who devises an improvement in a current process or suggests an entirely new process of manufacturing. Often a small suggestion can yield a big reward when applied to a mass-produced item. Body In this project we are presented with the following information: bounds for pricing decisions, market share determination, the product explosion matrix, sales history (units per month at average price), unit value, setup man-hours, running man hours, initial workforce, value of inventory, on hand units. We also know that we have eight end products, four subassemblies, eight parts, and four raw materials. The eight end products are comprised entirely from the subassemblies, parts, and raw materials. 
From this information I was able to determine how many units of each final product, how many units of parts to produce in a month, how many units of raw material to order every month and how to price the final products. The first step that I took in this project was to develop product structures for each product (please refer to the Appendices for product structures on all eight products, plus new product nine). This information was presented in product explosion matrix. For example, I determined that product one used one subassembly nine and one part thirteen. Part thirteen consisted of raw material twenty-one. Sub-assembly nine consists of part thirteen (which includes raw material twenty-one), raw material twenty one and raw material twenty-four. From this product explosion matrix I have realized that an end product does not just happen; they consist of many subassemblies, parts and raw material. We also determined the minimal direct costs to each of the eight products. The minimal direct product is the cost of the raw material, plus the price of the amount of labor for the assembly to end product. For product one we have a total of three raw material "twenty-one" which cost ten dollars a piece and one raw material "twenty-four" which cost twenty dollars each. We now have a total of fifty dollars for the price of the parts. Next we calculate the labor that goes into transforming these parts into a viable end product. We get a total of six hours of running man hours/unit and an hourly labor rate of $8.50, which gives us a total of fifty-one dollars. This gives a minimal total cost of $101 to produce product one. This number is useful in determining how much a unit actually cost to manufacture and what we must minimally sell the product for to make a profit. We can than analyze if a product costs to much to make or the sum of the parts is more than the price of the end product. 
Product eight had the lowest direct minimum cost ($89.50) and four had the highest minimal direct cost. From a purely economic stand point, it would be beneficial to use as much of raw material twenty-three ($5 unit) and as little of raw material twenty-two ($30 unit). This does not consider that raw material twenty-two may actually be more valuable than raw material twenty-three. Perhaps raw material twenty two may be gold or silver and raw material twenty-three may be sand or glass. I also converted all information in the sales history per month (figure four of the MANMAN packet). The purpose of this step was so that I could sort and add the sales numbers to chronicle the past twenty four months. Clearly product one was the best-selling apparatus, and product three, four and five where sales laggards. Entering the information into spreadsheet form was also necessary to present the eight products in graphical form. Of the following graph types that where at my disposal (line, bar, circle) to clearly illustrate the upward and downward trend of each of the eight product I chose the line graph method. A circle graph is good percentage comparisons or comparison of market share. Bar graphs can illustrate a snapshot in time but can distort trend data. At this point our class gathered into groups to discuss which product to discontinue. Obviously product one was not going to be of the discontinued products, since it was our volume leader. Based on the sales figure for the past twenty-four months my group decided to eliminate products three, four and five. Also, products three, four and five had the highest minimum direct costs as well. Since these products where expensive to manufacture and where our lowest selling products a group decision was made to discontinue these products. The discontinued product was then rolled over into a new product, now referred to product nine. 
Unfortunately, we where unable to decide by the information given if any of the discontinued products was a high margin product, low volume product (IE 50" big screen color Trinitron tube with oak cabinet and stereo sound). Moving right into our next step we began to analyze our bar charts to make our starting forecast. We viewed sales from each product to see if they fall under one the following situations: Base (Base + Trend) (Base + Trend) * Seasonality When a product is base the sales alter little each sales period or change erratically with external market signals. An example of a product that would fall under the base model would be sand bags. Sand bags sell at the same level month after month. If a retailer sells a hundred bags in March the will sell a hundred bags in October. But, in a flood plain after terrantiel downpour, the sales of sandbags increase exponentially. This is because many people purchase the sandbags to hold back the rising flood waters. Another example of a product that would emulate the base model is insulin. There is a limited number of people with insulin dependant diabetes. The people with insulin dependant diabetes unfortunately die off, but are replaced with other people who fall ill to the disease. There is very little movement up or down in the sale of insulin. The base plus trend model illustrates that a product has a trend of upward or downward groth in sales. Products at the begining or ending of their respective product cycles will display this type of performance. Sales of a new product such as Microsoft Windows95(tm) disk operating system will fall into this category. The sales of May are expected to be larger than April, the sales of April will be larger than March and so on. While the sales may decline (or increase) during a particular time frame, a trend of upward or downward growth will be apparent. 
Lastly, the base plus trend times seasonality attempts to forecast the swings in demand that are caused by seasonal changes that can be expected to repeat themselves during a single or consecutive time period. For example, florists experience a predictable increases in demand each year, both occur at similiar (or exact) times during the year; Mothers Day and St. Valentines Day. Florists must forecast demand for roses and other flowers so they can meet this predictable demand. If I where to construct a ten year historical graph for a neighborhood florist, there would be clear increase in demand every February and May, in every one of those years. A caveat to the previous example would be that in most lines a business forecasting is never this easy. If it was there would not be a production management class or operations management science! Some other methods used to forecast demand are: delphi method, historical analogy, simple moving average, box Jenkins type, exponential smoothing and regression analysis. Forecasting falls into four basic types: qualitative, time series analysis, casual relationships and simulation. All of the proceeding have pluses, minuses and degrees of accuraccy. I often depends on the precision of previous data. Also, as is often stated in financial prospectuses "past performance does not guarantee future results". For product one I used base plus trend. The sales started of at 1246 units and gradually increased to 2146 at the end of twenty four months. There was a slight dip in sales between month nine-teen and month twenty three. This drop can from internal or external variables. Product two was little more tricky. The swing where eratic and showed no detectable trend. I may have been able to use (Base + Trend) * Seasonality if there was not a decrease in sales from month eight and an increase in sales in month sixteen. For this I had to employ the base or simle method. 
While I find it hard to comprehend how television sales can be seasonal, products three, five and six fall under (Base + Trend) * Seasonality models. I was able to replicate the wave in demand with my forecast. Perhaps consumers are buying portable televisions to use at the beach while on vacation, or people are replacing there old televisions to watch the Superbowl championship game or world series. Or maybe even watch the Syracuse Orangemen in the NCAA college basketball championship! Conceivably, I was reading to much into product six when a decided on base plus trend model. The way I saw it was that none of the upward or downward where that substantial when compared with entire data, and sales from month one (521 units) decreased by almost fifty percent to 242 units. I felt the same way about product eight that I felt about product two, this product demostrated eratic swings in no particular trend. I forecasted demand using the base or simlple method for this product. From this point I was able to forecast demand. For the safety stock decision I always tried to error on the side of caution. On average I used a twenty five percent safety stock level. However, when calculating the MRP or labor plans I tried to have the minimal amount of surplus. This often means that I only had safety stock on hand from period to period. Conclusion From this project and from the class lectures I have received an understanding of how how much planning goes into even the most simplest of manufactured goods. Production managers must employ at least one type of forecasting method in order to avoid the everyday stock outs, late deliveries and labor problems that arise. Forecasts are vital to every business organization and for every significant management decision. Afterthought I feel that I could have further reduced costs by reducing some of the parts, sub assemblies and outsourcing some of the production. 
Another situation that I felt was unrealistic was that there was only one source for each part and when that part was unvailable, there was a stock out. Perhaps in future projects there can be allowance for this. f:\12000 essays\technology & computers (295)\Comparing Motorola and Intel Math Coprocessors.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Floating Point Coprocessors The designer of any microprocessor would like to extend its instruction set almost infinitely but is limited by the quantity of silicon available (not to mention the problems of testability and complexity). Consequently, a real microprocessor represents a compromise between what is desirable and what is acceptable to the majority of the chip's users. For example, the 68020 microprocessor is not optimized for calculations that require a large volume of scientific (i.e. floating point) calculations. One method to significantly enhance the performance of such a microprocessor is to add a coprocessor. To increase the power of a microprocessor, it does not suffice to add a few more instructions to the instruction set, but it involves adding an auxiliary processor that works in parallel to the MPU (Micro Processing Unit). A system involving concurrently operating processors can be very complex, since there need to be dedicated communication paths between the processors, as well as software to divide the tasks among them. A practical multiprocessing system should be as simple as possible and require a minimum overhead in terms of both hardware and software. There are various techniques of arranging a coprocessor alongside a microprocessor. One technique is to provide the coprocessor with an instruction interpreter and program counter. Each instruction fetched from memory is examined by both the MPU and the coprocessor. If it is a MPU instruction, the MPU executes it; otherwise the coprocessor executes it. 
It can be seen that this solution is feasible, but by no means simple, as it would be difficult to keep the MPU and coprocessor in step. Another technique is to equip the microprocessor with a special bus to communicate with the external coprocessor. Whenever the microprocessor encounters an operation that requires the intervention of the coprocessor, the special bus provides a dedicated high-speed communication between the MPU and the coprocessor. Once again, this solution is not simple. There are more methods of connecting two (or more) concurrently operating processors, which will be covered in more detail during the specific discussions of the Intel and Motorola floating point coprocessors. Motorola Floating Point Coprocessor (FPC) 68882 The designers of the 68000-family coprocessors decided to implement coprocessors that could work with existing and future generations of microprocessors with minimal hardware and software overhead. The actual approach taken by the Motorola engineers was to tightly couple the coprocessor to the host microprocessor and to treat the coprocessor as a memory-mapped peripheral lying inside the CPU address space. In effect, the MPU fetches instructions from memory, and, if an instruction is a coprocessor instruction, the MPU passes it to the coprocessor by means of the MPU's asynchronous data transfer bus. By adopting this approach, the coprocessor does not have to fetch or interpret instructions itself. Thus if the coprocessor requires data from memory, the MPU must fetch it. There are advantages and disadvantages to this design. Most notably, the coprocessor does not have to deal with, for example, bus errors, as all fetching is performed by the host MPU. On the other hand, the FPC can not act as a bus master (making it a non-DMA device), making memory accesses by the FPC slower than if it were directly connected to the address and data bus. 
In order for the coprocessor to work as a memory mapped device, the designers of the 68000 series of MPU's had to set aside certain bit patterns to represent opcodes for the FPC. In the case of the 68000's, the FPC is accessed through the opcode 1111(2). This number is the same as 'F' in hexadecimal notation, so this bit pattern is often referred to as the F-line. Interface The 68882 FPC employs an entirely conventional asynchronous bus interface like all 68000 class devices, and absolutely no new signals whatsoever are required to connect the unit to an MC 68020 MPU. The 68882 can be configured to run under a variety of different circumstances, including various sized data buses and clock speeds. What follows is a diagram of connections necessary to connect the 68882 to a 68020 or 68030 MPU using a 32-bit data path. As mentioned previously, all instructions for the FPC are of the F-line format, that is, they begin with the bit pattern 1111(2). A generic coprocessor instruction has the following format: the first four bits must be 1111. This identifies the instruction as being for the coprocessor. The next three bits identify the coprocessor type, followed by three bits representing the instruction type. The meaning of the remaining bits varies depending on the specific instruction. Coprocessor Operation When the MPU detects an F-line instruction, it writes the instruction into the coprocessors memory mapped command register in CPU space. Having sent a command to the coprocessor, the host processor reads the reply from the coprocessor's response register. The response could, for example, instruct the processor to fetch data from memory. Once the host processor has complied with the demands from the coprocessor, it is free to continue with instruction processing, that is, both the processor and coprocessor act concurrently. This is why system speed can be dramatically improved upon installation of a coprocessor. 
MC 68882 Specifics The MC 68882 floating point coprocessor is basically a very simple device, though it's data manual is nearly as thick as that of the MC 68000. This complexity is due to the IEEE floating point arithmetic standards rather than the nature of the FPC. The 68882 contains eight 80-bit floating point data registers, FP0 to FP7, one 32-bit control register, FPCR, and one 32-bit status register, FPSR. Because the FPC is memory mapped in CPU space, these registers are directly accessible to the programmer within the register space of the host MPU. In addition to the standard byte, word and longword operations, the FPC supports four new operand sizes: single precision real (.S), double precision real (.D), extended precision real (.X) and packed decimal string (.P). All on-chip calculations take place in extended precision format and all floating point registers hold extended precision values. The single real and double real formats are used to input and output operands. All three real floating point formats comply with the corresponding IEEE floating point number standards. The FPC has built in functions to convert between the various data formats added by the unit, for example a register move with specified operand type (.P, .B, etc). The 68882 FPC has a significant instruction set designed to satisfy many number-crunching situations. All instructions native to the FPC start with the bit pattern 1111(2) to show that the instruction deals with floating point numbers. Some instructions supported by the FPC include FCOSH, FETOX, FLOG2, FTENTOX, FADD, FMUL and FSQRT. There are many more instructions available, but this excerpt demonstrates the versatility of the 68882 unit. One of the registers within the FPC is the status register. It is very similar in function to the status register in a CPU; it is updated to show the outcome of the most recently executed instruction. 
Flags within the status register of the FPC include divide by zero, infinity, zero, overflow, underflow and not a number. Some of the conditions signaled by the status register of the FPC (for example divide by zero) require an exception routine to be executed, so that the user is informed of the situation. These exceptions are stored and executed within the host MPU, which means that the FPC can be used to control loops and tests within user programs - further extending the functionality of the coprocessor. Intel Math Coprocessor 80387 DX In many respects, the Intel 80387 math coprocessor (MCP) is very similar to the MC 68882. Both designs were influenced by such factors as cost, usability and performance. There are, however, subtle differences in the designs of the two units. Firstly, I shall discuss the similarities between the designs followed by differences. Like the 68882, the 80387 requires no additional hardware to be connected to a 80386. It is a non-DMA device, having no direct access to the address bus of the motherboard. All memory and I/O is handled by the CPU, which upon detection of a MCP instruction passes it along to the MCP. If additional memory reads are necessary to load operands or data, the MCP instructs the CPU to perform these actions. This design, although reducing MCP performance when compared to a direct connection to the address bus, significantly decreases complexity of the MCP as no separate address decoding or error handling logic is necessary. The connection between the CPU and the MCP instruction is via a synchronous bus, while internal operation of the MCP can run asynchronously (higher clockspeed). Moreover, the three functional units of the MCP can work in parallel to increase system performance. The CPU can be transferring commands and data to the MCP bus control logic while the MCP floating unit is executing the current instruction. 
Similar to the 68882, the 80387 has a bit pattern (11011(2)) reserved to identify instructions intended for it. Also, the registers of the MCP are memory mapped into CPU address space, making the internal registers of the MCP available to programmers. Internally, the 80387 contains three distinct units: the bus control logic (BCL), the data interface and control unit and the actual floating point unit. The data interface and control unit directs the data to the instruction decoder. The instruction decoder decodes the ESC instructions sent to it by the CPU and generates controls that direct the data flow in the instruction buffer. It also triggers the microinstruction sequencer that controls execution of each instruction. If the ESC instruction is FINIT, FCLEX, FSTSW, FSTSW AX, or FSTCW, the control unit executes it independently of the FPU and the sequencer. The data interface and control unit is the unit that generates the BUSY?, PEREQ and ERROR? signals that synchronize Intel 387 DX MCP activities with the Intel 80386 DX CPU. It also supports the FPU in all operations that it cannot perform alone (e.g. exceptions handling, transcendental operations, etc.). The FPU executes all instructions that involve the register stack, including arithmetic, logical, transcendental, constant, and data transfer instructions. The data path in the FPU is 84 bits wide (68 significant bits, 15 exponent bits, and a sign bit) which allows internal operand transfers to be performed at very high speeds. Interface The MCP is connected to the MPU via a synchronous connection, while the numeric core can operate at a different clock speed, making it asynchronous. The following diagram will clarify this. The following diagram shows the specific connections necessary between the 80386 MPU and the 80387 MCP. A typical coprocessor instruction must begin with the bit pattern 11011(2) to identify the instruction for the coprocessor. 
The bus control logic of the MCP (BCL) communicates solely with the CPU using I/O bus cycles. The BCL appears to the CPU as a special peripheral device. It is special in one important respect: the CPU uses reserved I/O addresses to communicate with the BCL. The BCL does not communicate directly with memory. The CPU performs all memory access, transferring input operands from memory to the MCP and transferring outputs from the MCP to memory. Coprocessor Operation When the CPU detects the arrival of a coprocessor instruction, it writes the instruction into the coprocessors memory mapped command register in CPU space. Having sent a command to the coprocessor, the host processor reads the reply from the coprocessor's signals. The response could, for example, instruct the processor to fetch data from memory. Once the host processor has complied with the demands from the coprocessor, it is free to continue with instruction processing, that is, both the processor and coprocessor act concurrently. This is why system speed can be dramatically improved upon installation of a coprocessor. 80387 Specifics Just like the MC 68882 floating point coprocessor, the Intel 80387 is basically a very simple device. Like any reasonable math coprocessor, it conforms to the IEEE standards of floating point number representations. The 80387 contains eight 82-bit floating point data registers (including a 2- bit tag field), R0 to R7, one 16-bit control register, one 16-bit status register and a tag word (that contains the tag fields for the eight data registers). The MCP also indirectly uses the 48-bit instruction and data pointer registers of the 80386 host processor, even though these are external to the unit. Because the FPC is memory mapped in CPU space, these registers are directly accessible to the programmer within the register space of the host MPU. 
In addition to the standard word, short and long (16, 32 and 64-bit) integer operations, the MCP supports four new operand sizes: single precision real, double precision real, extended precision real and packed binary coded decimal strings. All on-chip calculations take place in extended precision format and all floating point registers hold extended precision values. The single real and double real formats are used to input and output operands. All three real floating point formats comply with the corresponding IEEE floating point number standards. The MCP has built-in functions to convert between the various data formats added by the unit. The 80387 has a substantial instruction set designed to satisfy many number-crunching situations. All instructions native to the MCP start with the bit pattern 11011(2) to show that the instruction should be directed to the coprocessor. Some (of the over 70) instructions supported by the MCP are FCOMP, FDIV, FSQRT, FSINCOS and FINIT. There are many more instructions available, but this sample demonstrates the versatility of the 80387 unit, which is very similar to that of the 68882 unit. One of the registers within the MCP is the status register. Just as on the 68882, the status register shows the outcome of the most recently executed instruction. Flags within the status register of the MCP include divide by zero, infinity, zero, overflow, underflow and invalid operation. Some of the conditions signaled by the status register (for example divide by zero) require an exception routine to be executed by the host MPU, so that the user is informed of the situation. These exceptions are stored and executed within the host MPU, which means that the MCP's status can again be used to control loops and tests within user programs - further extending the functionality of the coprocessor. 
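Checking these flags amounts to simple bit tests. The sketch below assumes the standard x87 status-word bit positions (invalid operation in bit 0, divide by zero in bit 2, overflow in bit 3, underflow in bit 4); the function name is my own:

```python
# Bit positions follow the standard x87 status-word layout; the helper
# name is illustrative, not from any real library.
EXCEPTION_FLAGS = {
    "invalid operation": 1 << 0,
    "divide by zero":    1 << 2,
    "overflow":          1 << 3,
    "underflow":         1 << 4,
}

def raised_exceptions(status_word: int):
    """List the exception conditions flagged in a status word value."""
    return [name for name, mask in EXCEPTION_FLAGS.items()
            if status_word & mask]
```

An exception handler on the host MPU would read the status word with FSTSW and perform tests exactly like these before deciding how to respond.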
The Intel 80387 DX MCP register set can be accessed either as a stack, with instructions operating on the top one or two stack elements, or as a fixed register set, with instructions operating on explicitly designated registers. The TOP field in the status word identifies the current top-of-stack register. A "push" operation decrements TOP by one and loads a value into the new TOP register. A "pop" operation stores the value from the current top register and then increments TOP by one. Like the 80386 DX microprocessor stacks in memory, the MCP register stack grows "down" toward lower-addressed registers. Instructions may address the data registers either implicitly or explicitly, and explicit register addressing is also relative to TOP. A notable feature of the 80387 is the addition of a 2-bit tag field to each of the eight floating point registers. The tag word marks the content of each numeric data register, as Figure 2.1 shows. Each two-bit tag represents one of the eight numeric registers. The principal function of the tag word is to optimize the MCP's performance and stack handling by making it possible to distinguish between empty and nonempty register locations. It also enables exception handlers to check the contents of a stack location without the need to perform complex decoding of the actual data. Evaluation of the Two Coprocessors I started this paper thinking that the Motorola math coprocessor had to be better in design, implementation and features than its Intel counterpart. Throughout my research I came to realize that my opinions were based on nothing but myths. In many respects the two coprocessors are very similar to each other, while in other respects they differ radically in design and implementation. I will sum up the points I consider most important. 1. Intel uses a synchronous bus between the CPU and the MCP, while the actual internal floating point unit can run asynchronously to this. 
This increases the complexity of the design, as synchronization logic must exist between the two processors, but it allows the floating point unit to run at a higher clock speed than the CPU when a dedicated clock generator is installed. 2. The (logical, not physical) addition of tag fields to the data registers in the 80387 makes certain operations much faster, as some information does not need to be decoded because it is "cached" in the tag fields. 3. The 80387 can use its registers either in stack mode or in absolute addressing mode. Though some operations require stack addressing, this feature adds a little more flexibility to the MCP (even though the stack operations might be a legacy from the 8087 or 80287). In most other fields, the coprocessors are equals. They have the same number of data registers, both expose their own instruction set and registers to programmers in a transparent fashion, and both support the same IEEE numeric representation standards. Probably both coprocessors have similar processing power at equal clock speeds as well. Even though the Motorola coprocessor seems superior by name, I have to admit that the 80387 gets my vote for more flexibility and thoughtful optimizations (tags). 
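The stack behavior referred to in point 3 (push decrements TOP, pop increments it, and explicit ST(i) addressing is TOP-relative, as described earlier) can be modeled in a few lines. This is an illustrative sketch with invented names, not production code:

```python
class FPStack:
    """Minimal model of the 80387 register stack: eight registers
    addressed relative to a 3-bit TOP field in the status word."""

    def __init__(self):
        self.regs = [None] * 8
        self.top = 0  # the TOP field

    def push(self, value):
        self.top = (self.top - 1) % 8   # push decrements TOP...
        self.regs[self.top] = value     # ...then loads the new ST(0)

    def pop(self):
        value = self.regs[self.top]     # store from the current ST(0)...
        self.regs[self.top] = None
        self.top = (self.top + 1) % 8   # ...then increment TOP
        return value

    def st(self, i):
        """ST(i): explicit addressing is also relative to TOP."""
        return self.regs[(self.top + i) % 8]
```

Pushing two values and reading ST(0) and ST(1) back shows why the stack "grows down" toward lower-numbered registers.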
f:\12000 essays\technology & computers (295)\Computer 2.TXT 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
f:\12000 essays\technology & computers (295)\Computer Assignment.TXT 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
Computers in Education The typical school has 1 computer per 20 students, a ratio that computer educators feel is still not high enough to affect classroom learning as much as books and classroom conversation. 
Some critics see computer education as merely the latest in a series of unsuccessful attempts to revolutionise education through the use of audio- and visually oriented nonprint media. For example, motion pictures, broadcast television, filmstrips, audio recorders, and videotapes were all initially heralded for their instructional potential, but each of these ultimately became a minor classroom tool alongside conventional methods. Communications Satellite A communications satellite is an artificial SATELLITE placed into orbit around the Earth to facilitate communications on Earth. Most long-distance radio communication across land is sent via MICROWAVE relay towers. In effect, a satellite serves as a tall microwave tower to permit direct transmission between stations, but it can interconnect any number of stations that are included within the antenna beams of the satellite rather than simply the two ends of the microwave link. Computer Crime Computer crime is defined as any crime involving a computer accomplished through the use or knowledge of computer technology. Computers are objects of crime when they or their contents are damaged, as when terrorists attack computer centres with explosives or gasoline, or when a "computer virus" (a program capable of altering or erasing computer memory) is introduced into a computer system. Personal Computer A personal computer is a computer that is based on a microprocessor, a small semiconductor chip that performs the operations of a CPU. Personal computers are single-user machines, whereas larger computers generally have multiple users. Personal computers have many uses: word processing, communicating with other computers over a phone line using a modem, databases, and leisure games are just some of them. 
Computers for Leisure Games As they proliferated, video games gained colour and complexity and adopted the basic theme that most of them still exhibit: the violent annihilation of an enemy by means of one's skill at moving a lever or pushing a button. Many of the games played on home computers are more or less identical to those in video arcades. Increasingly, however, computer games are becoming more sophisticated, more difficult, and no longer dependent on elapsed time; a few computer games go on for many hours. Graphics have improved to the point where they almost resemble movies rather than the rough, jagged video screens of past games. Some of the newest arcade games generate their graphics through CD-ROM. Many include complicated sounds; some even have music and real actors. Given an imaginative programmer, a sophisticated video game has the potential for offering an almost limitless array of exotic worlds and fantastic situations. In the early 1990s, parents and government were becoming increasingly aware of violence in video games, so warnings were introduced on the box, much as in the movies. 
f:\12000 essays\technology & computers (295)\Computer Building.TXT 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
The computer we are trying to buy has a: 
Cyrix 166+ Motherboard and CPU 
32 megs of RAM 
2.1 gig Hard Drive 
USRobotics x2 56k modem (the best normal modem you can buy) 
Logitech mouse, preferably the $80 Trackman Marble (best mouse you can buy) 
Sound Blaster 64 AWE sound card (best on the market) 
Creative Labs 12x CD Drive 
3.5" floppy drive 
PCI 2 meg 64-bit graphics card 
17" monitor (big one) 
The first place I went to was InfoCastle Computers at www.infocastle.com (not local). They seem to have fair prices, but nothing to get excited about. Once you've seen some of the places out of Sacramento, you realize that buying local is good because: 
A. Their prices are usually better 
B. You don't have to pay the shipping 
C. 
You can see what you're buying before you pay for it. Anyway, here are the InfoCastle prices: 
RAM 4x32-16 MB 60 ns $100.00 x 2 = $200 
Hard Drive Western Digital 2.1 GB IDE Caviar $250 
CPU Cyrix P166+ (133 MHz clock) $169.90 
Modem 33.6 Apache Modem $109.95 
Mouse Logitech Mouse $23.95 
Sound Card Sound Conductor $39.00 
CD-ROM Sony 8x $129.00 
Floppy Disks NEC $29.00 
PCI Video Card 2 MB Trident $59.00 (not 64-bit though) 
Keyboard $11 
As this place didn't have cases, motherboards, or monitors, we'll price them at $40, $150, and $300 respectively, because these are the average prices for these items. Total price for the desired computer here was about $1530 plus shipping, which would end up at about $1600 total, but this machine wouldn't have as good a CD drive, sound card, modem, or monitor (15 incher) as the preferred computer would, and costs $100 more... This isn't the one for us. The next place visited was ComputerSmith's Parts Place. They seem to have much better prices than the last place, InfoCastle, and their web page is nicer. Their URL is www.websmiths.com/csmiths/ . ComputerSmith's prices: 
Creative Labs 16 Sound Card $52 
Western Digital 2.5 GB Hard Drive $247 (bigger but cheaper!?) 
18x CD-ROM $124 
Mini-Mid Tower Case, 230 W Power Supply $42 
AMD 5K86 P-166 CPU $129 (it's not as good as the Cyrix, but it'll do) 
Keytronic 104 Key $25 
16 MB 4x32 RAM $83 x 2 = $166 
56K US Robotics Internal Modem $195 
Microsoft Intellimouse $59 (no Logitech, so Microsoft, oh yech) 
17 inch SVGA Flat Screen Monitor $485 (nicer than the one on the preferred computer) 
Pentium P-5 Intel Triton III 512K Motherboard $118 
Matrox Mystique 2 MB $109 (this is better than the one in the preferred computer!!!) 
3.5" Floppy Drive 1.44 meg $25 
Canon BJC-4200 InkJet Printer $259 (I put this in for fun, to get an idea of how much a quality printer costs) 
The total cost for the computer with a lower grade mouse, CPU, and sound card but higher quality video card, CD drive, monitor, and hard drive would be $1776. With the CD drive and all, this is a great price for this machine, but not what we want. What we want is the Preferred Computer. The best prices I found to build a Cyrix 6x86 166+ with a 2 meg video card, 32 megs of RAM, a Creative Labs 12x CD drive, a Sound Blaster AWE 64, a Logitech Trackman Marble (best mouse on the market), a 56k x2 USR modem, a 1.44 meg floppy drive, a 2.1 gig hard drive, and a 17" monitor are as follows: 
Cyrix 6x86 166+ Motherboard and Chip w/1 meg S3 Trio 64V+ graphics card + medium tower case $289 
32 megs of RAM $138 
Creative Sound Blaster 64 and CD Drive $269 
Logitech Trackman Marble $69 
x2 Modem $179 
1.44 Floppy Drive $27 
2.1 Gig Hard Drive $199 
17" monitor $329 
Total: $1499 + tax 
This computer was put together by the good people at Ben and Son Computers. For all your computing needs, call (916)637-4515 and ask for Ben. You tell him what you want and how much money you have for it, and he'll make it happen speedy-like. I got these prices out of the April issue of California Computer News, which you can get at any local grocery or computer store, and best of all: it's free!!! But come to Ben for the best prices. Word from Ben: Buying computer parts is just like buying anything else. You need to know where to look, what to look for, and how much a good price is. You DON'T have to settle for a terrible price just because you don't know what a good one is. I've done much research over 8 years and I know what to look for, and if you would like some advice on buying your machine, give me a call. 
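As a sanity check on the arithmetic above, the component prices of the preferred build can be totalled programmatically. This is a throwaway sketch; the dictionary keys are just shorthand labels for the quoted parts:

```python
# Component prices for the preferred build, as quoted above (USD).
preferred_build = {
    "Cyrix 6x86 166+ board, chip, 1 MB video card, tower case": 289,
    "32 megs of RAM": 138,
    "Sound Blaster 64 and CD drive": 269,
    "Logitech Trackman Marble": 69,
    "x2 modem": 179,
    "1.44 floppy drive": 27,
    "2.1 gig hard drive": 199,
    '17" monitor': 329,
}

total = sum(preferred_build.values())
assert total == 1499  # matches the quoted $1499 + tax
```

The same check applied to the ComputerSmith's list (thirteen line items, counting the 16 MB RAM sticks at $83 each) reproduces the quoted $1776.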
f:\12000 essays\technology & computers (295)\Computer Communications.TXT 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
Computer Communications 
Bus Network 
Bus Network, in computer science, a topology (configuration) for a local area network in which all nodes are connected to a main communications line (bus). On a bus network, each node monitors activity on the line. Messages are detected by all nodes but are accepted only by the node(s) to which they are addressed. Because a bus network relies on a common data "highway," a malfunctioning node simply ceases to communicate; it doesn't disrupt operation as it might on a ring network, in which messages are passed from one node to the next. To avoid collisions that occur when two or more nodes try to use the line at the same time, bus networks commonly rely on collision detection or Token Passing to regulate traffic. 
Star Network 
Star Network, in computer science, a local area network in which each device (node) is connected to a central computer in a star-shaped configuration (topology); commonly, a network consisting of a central computer (the hub) surrounded by terminals. In a star network, messages pass directly from a node to the central computer, which handles any further routing (as to another node) that might be necessary. A star network is reliable in the sense that a node can fail without affecting any other node on the network. Its weakness, however, is that failure of the central computer results in a shutdown of the entire network. And because each node is individually wired to the hub, cabling costs can be high. 
Ring Network 
Ring Network, in computer science, a local area network in which devices (nodes) are connected in a closed loop, or ring. Messages in a ring network pass in one direction, from node to node. As a message travels around the ring, each node examines the destination address attached to the message. 
If the address is the same as the address assigned to the node, the node accepts the message; otherwise, it regenerates the signal and passes the message along to the next node in the circle. Such regeneration allows a ring network to cover larger distances than star and bus networks. It can also be designed to bypass any malfunctioning or failed node. Because of the closed loop, however, new nodes can be difficult to add. A ring network is diagrammed below.

Asynchronous Transfer Mode

ATM is a new networking technology standard for high-speed, high-capacity voice, data, text, and video transmission that will soon transform the way businesses and all types of organizations communicate. It will enable the management of information, integration of systems, and communications between individuals in ways that, to some extent, haven't even been conceived yet. ATM can transmit more than 10 million cells per second, resulting in higher capacity, faster delivery, and greater reliability. ATM simplifies information transfer and exchange by compartmentalizing information into uniform segments called cells. These cells allow any type of information, from voice to video, to be transmitted over almost any type of digitized communications medium (fiber optics, copper wire, cable). This simplification can eliminate the need for redundant local and wide area networks and eradicate the bottlenecks that plague current networking systems.
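The ring-forwarding rule described above (a node accepts a message addressed to it; otherwise it regenerates the signal and passes it to the next node in the loop) can be sketched as a toy simulation. The node names and the `deliver` helper are illustrative, not part of any real protocol:

```python
# Toy model of one-directional message passing around a ring network.
def deliver(ring, src, dst, message):
    """Pass a message one direction around the ring; return the hop count."""
    i = ring.index(src)
    hops = 0
    while True:
        i = (i + 1) % len(ring)     # signal regenerated to the next node
        hops += 1
        if ring[i] == dst:          # address matches: this node accepts it
            return hops
        if ring[i] == src:          # came all the way around: dst not present
            raise LookupError(f"{dst} not on ring")

ring = ["A", "B", "C", "D"]
deliver(ring, "A", "C", "hello")    # 2 hops (via B)
```

Because every intermediate node regenerates the signal, the ring can span larger distances than a bus, as the text notes; the cost is that delivery time grows with the number of hops.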
Eventually, global standardization will enable information to move from country to country at least as fast as it now moves from office to office, in many cases faster.

Fiber Distributed Data Interface

The Fiber Distributed Data Interface (FDDI) modules from Bay Networks are designed for high-performance, high-availability connectivity in support of internetwork topologies that include:

- Campus or building backbone networks for lower-speed LANs
- Interconnection of mainframes or minicomputers to peripherals
- LAN interconnection for workstations requiring high-performance networking

FDDI is a 100-Mbps token-passing LAN that uses highly reliable fiber-optic media and performs automatic fault recovery through dual counter-rotating rings. A primary ring supports normal data transfer while a secondary ring allows for automatic recovery. Bay Networks FDDI supports standards-based translation bridging and multiprotocol routing. It is also fully compliant with ANSI, IEEE, and Internet Engineering Task Force (IETF) FDDI specifications. The Bay Networks FDDI interface features a high-performance second-generation Motorola FDDI chip set in a design that provides cost-effective high-speed communication over an FDDI network. The FDDI chip set provides expanded functionality such as transparent and translation bridging as well as many advanced performance features. Bay Networks FDDI is available in three versions: multimode, single-mode, and hybrid. All versions support a Class A dual attachment or dual-homing Class B single attachment. Bay Networks FDDI provides the performance required for the most demanding LAN backbone and high-speed interconnect applications. Forwarding performance over FDDI exceeds 165,000 packets per second (pps) in the high-end BLN and BCN.
An innovative High-Speed Filters option filters packets at wire speed, enabling microprocessor resources to remain dedicated to packet forwarding.

Data Compression in Graphics: MPEG

MPEG is a group of people that meet under ISO (the International Standards Organization) to generate standards for digital video (sequences of images in time) and audio compression. In particular, they define a compressed bit stream, which implicitly defines a decompressor. However, the compression algorithms are up to the individual manufacturers, and that is where proprietary advantage is obtained within the scope of a publicly available international standard. MPEG meets roughly four times a year for roughly a week each time. In between meetings, a great deal of work is done by the members, so it doesn't all happen at the meetings; the work is organized and planned at the meetings. So far (as of January 1996), MPEG has completed the standard for its first phase, called MPEG I. This defines a bit stream for compressed video and audio optimized to fit into a bandwidth (data rate) of 1.5 Mbits/s. This rate is special because it is the data rate of (uncompressed) audio CDs and DATs. The standard is in three parts, video, audio, and systems, where the last part gives the integration of the audio and video streams with the proper timestamping to allow synchronization of the two. They have also gotten well into MPEG phase II, whose task is to define a bit stream for video and audio coded at around 3 to 10 Mbits/s.

How MPEG I works

First off, it starts with a relatively low-resolution video sequence (possibly decimated from the original) of about 352 by 240 pixels at 30 frames/s, but original high (CD) quality audio. The images are in color, but converted to YUV space, and the two chrominance channels (U and V) are decimated further to 176 by 120 pixels.
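The chrominance decimation just described (halving the U and V resolution in each direction, e.g. 352x240 down to 176x120) can be sketched as a simple block average. The `decimate_2x2` helper and the plain averaging rule are illustrative assumptions; real encoders filter properly before subsampling:

```python
# Sketch of 2x2 chroma decimation: average non-overlapping 2x2 blocks,
# halving the resolution in both directions.
def decimate_2x2(channel):
    """Reduce a 2D list of pixel values by averaging each 2x2 block."""
    h, w = len(channel), len(channel[0])
    return [
        [
            (channel[y][x] + channel[y][x + 1]
             + channel[y + 1][x] + channel[y + 1][x + 1]) // 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

u = [[10, 12, 20, 22],
     [14, 16, 24, 26],
     [30, 30, 40, 40],
     [30, 30, 40, 40]]
decimate_2x2(u)   # [[13, 23], [30, 40]]
```

As the text goes on to say, the eye tolerates this loss of chroma resolution well in natural images, which is why only the luminance channel keeps full resolution.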
It turns out that you can get away with a lot less resolution in those channels and not notice it, at least in "natural" (not computer-generated) images. The basic scheme is to predict motion from frame to frame in the temporal direction, and then to use DCTs (discrete cosine transforms) to organize the redundancy in the spatial directions. The DCTs are done on 8x8 blocks, and the motion prediction is done in the luminance (Y) channel on 16x16 blocks. In other words, given the 16x16 block in the current frame that you are trying to code, you look for a close match to that block in a previous or future frame (there are backward prediction modes where later frames are sent first to allow interpolating between frames). The DCT coefficients (of either the actual data, or the difference between this block and the close match) are "quantized," which means that you divide them by some value to drop bits off the bottom end. Hopefully, many of the coefficients will then end up being zero. The quantization can change for every "macroblock" (a macroblock is 16x16 of Y and the corresponding 8x8s in both U and V). The results of all of this, which include the DCT coefficients, the motion vectors, and the quantization parameters (and other stuff), are Huffman coded using fixed tables. The DCT coefficients have a special Huffman table that is "two-dimensional" in that one code specifies a run-length of zeros and the non-zero value that ended the run. Also, the motion vectors and the DC DCT components are DPCM (subtracted from the last one) coded.

f:\12000 essays\technology & computers (295)\Computer Crime 2.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

A report discussing the proposition that computer crime has increased dramatically over the last 10 years.

Introduction

Computer crime is generally defined as any crime accomplished through special knowledge of computer technology.
Increasing instances of white-collar crime involve computers as more businesses automate and the information held by the computers becomes an important asset. Computers can also become objects of crime when they or their contents are damaged, for example when vandals attack the computer itself, or when a "computer virus" (a program capable of altering or erasing computer memory) is introduced into a computer system. As subjects of crime, computers represent the electronic environment in which frauds are programmed and executed; an example is the transfer of money balances in accounts to perpetrators' accounts for withdrawal. Computers are instruments of crime when they are used to plan or control such criminal acts. Examples of these types of crimes are complex embezzlements that might occur over long periods of time, or when a computer operator uses a computer to steal or alter valuable information from an employer.

Variety and Extent

Since the first cases were reported in 1958, computers have been used for most kinds of crime, including fraud, theft, embezzlement, burglary, sabotage, espionage, murder, and forgery. One study of 1,500 computer crimes established that most of them were committed by trusted computer users within businesses, i.e. persons with the requisite skills, knowledge, access, and resources. Much of known computer crime has consisted of entering false data into computers. This method of computer crime is simpler and safer than the complex process of writing a program to change data already in the computer. Now that personal computers with the ability to communicate by telephone are prevalent in our society, increasing numbers of crimes have been perpetrated by computer hobbyists, known as "hackers," who display a high level of technical expertise. These "hackers" are able to manipulate various communications systems so that their interference with other computer systems is hidden and their real identity is difficult to trace.
The crimes committed by most "hackers" consist mainly of simple but costly electronic trespassing, copyrighted-information piracy, and vandalism. There is also evidence that organised professional criminals have been attacking and using computer systems as they find their old activities and environments being automated. Another area of grave concern to both the operators and users of computer systems is the increasing prevalence of computer viruses. A computer virus is generally defined as any sort of destructive computer program, though the term is usually reserved for the most dangerous ones. The ethos of a computer virus is an intent to cause damage, "akin to vandalism on a small scale, or terrorism on a grand scale." There are many ways in which viruses can be spread. A virus can be introduced to networked computers, thereby infecting every computer on the network, or spread by sharing disks between computers. As more home users now have access to modems, bulletin board systems where users may download software have increasingly become the target of viruses. Viruses cause damage by attacking another file, by simply filling up the computer's memory, or by using up the computer's processor power. There are a number of different types of viruses, but one of the factors common to most of them is that they all copy themselves (or parts of themselves). Viruses are, in essence, self-replicating. We will now consider a "pseudo-virus" called a worm. People in the computer industry do not agree on the distinctions between worms and viruses. Regardless, a worm is a program specifically designed to move through networks. A worm may have constructive purposes, such as to find machines with free resources that could be more efficiently used, but usually a worm is used to disable or slow down computers. More specifically, worms are defined as "computer virus programs ... [which] propagate on a computer network without the aid of an unwitting human accomplice.
These programs move of their own volition based upon stored knowledge of the network structure." Another type of virus is the "Trojan Horse." These viruses hide inside another seemingly harmless program, and once the Trojan Horse program is used on the computer system, the virus spreads. One of the most famous virus types of recent years is the Time Bomb, a delayed-action virus. This type of virus gained notoriety as a result of the Michelangelo virus, which was designed to erase the hard drives of people using IBM-compatible computers on the artist's birthday. Michelangelo was so prevalent that it was even distributed accidentally by some software publishers when the software developers' computers became infected. SYSOPs must also worry about being liable to their users for viruses that cause a disruption in service: service may be disrupted by the virus itself, or suspended to prevent the virus from spreading. If the SYSOP has guaranteed to provide continuous service, then any disruption could result in a breach of contract, and litigation could ensue. However, contract provisions could provide for excuse or deferral of obligation in the event of disruption of service by a virus.

Legislation

The first federal computer crime law, entitled the Counterfeit Access Device and Computer Fraud and Abuse Act of 1984, was passed in October of 1984. The Act made it a felony to knowingly access a computer without authorisation, or in excess of authorisation, in order to obtain classified United States defence or foreign relations information with the intent or reason to believe that such information would be used to harm the United States or to advantage a foreign nation. The Act also attempted to protect financial data: attempted access to obtain information from the financial records of a financial institution, or in a consumer file of a credit reporting agency, was also outlawed.
Access to use, destroy, modify or disclose information found in a computer system (as well as to prevent authorised use of any computer used for government business) was also made illegal. The 1984 Act had several shortcomings, and was revised in the Computer Fraud and Abuse Act of 1986. Three new crimes were added in the 1986 Act: a computer fraud offence, modelled after the federal mail and wire fraud statutes; an offence for the alteration, damage or destruction of information contained in a "federal interest computer"; and an offence for trafficking in computer passwords under some circumstances. Even the knowing and intentional possession of a sufficient number of counterfeit or unauthorised "access devices" is illegal. This statute has been interpreted to cover computer passwords "which may be used to access computers to wrongfully obtain things of value, such as telephone and credit card services."

Remedies and Law Enforcement

Business crimes of all types are probably decreasing as a direct result of increasing automation. When a business activity is carried out with computer and communications systems, data are better protected against modification, destruction, disclosure, misappropriation, misrepresentation, and contamination. Computers impose a discipline on information workers and facilitate the use of almost perfect automated controls that were never possible when these had to be applied by the workers themselves under management edict. Computer hardware and software manufacturers are also designing computer systems and programs that are more resistant to tampering. Recent U.S. legislation, including laws concerning privacy, credit card fraud and racketeering, provides criminal-justice agencies with tools to fight business crime. As of 1988, all but two states had specific computer-crime laws, and a federal computer-crime law (1986) deals with certain crimes involving computers in different states and in government activities.
Conclusion

There are no valid statistics about the extent of computer crime. Victims often resist reporting suspected cases, because they can lose more from embarrassment, lost reputation, litigation, and other consequential losses than from the acts themselves. Limited evidence indicates that the number of cases is rising each year because of the increasing number of computers in business applications where crime has traditionally occurred. The largest recorded crimes, involving insurance, banking, product inventories, and securities, have resulted in losses of tens of millions to billions of dollars, and all these crimes were facilitated by computers.

Bibliography

Bequai, August, Techno Crimes (1986).
Mungo, Paul, and Clough, Bryan, Approaching Zero: The Extraordinary Underworld of Hackers, Phreakers, Virus Writers, and Keyboard Criminals (1993).
Norman, Adrian R. D., Computer Insecurity (1983).
Parker, Donn B., Fighting Computer Crime (1983).
Griffith, Dodd S., The Computer Fraud and Abuse Act of 1986: A Measured Response to a Growing Problem, 43 Vand. L. Rev. 453, 455 (1990).

f:\12000 essays\technology & computers (295)\Computer Crime 3.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Computer Crime by: Manik Saini

Advances in telecommunications and in computer technology have brought us to the information revolution. The rapid advancement of the telephone, cable, satellite, and computer networks, combined with technological breakthroughs in computer processing speed and information storage, has led us to the latest revolution, and also the newest style of crime: "computer crime." The following information will provide you with evidence that, beyond reasonable doubt, computer crime is on the increase in the following areas: hackers, hardware theft, software piracy, and the information highway. This information is gathered from expert sources such as researchers, journalists, and others involved in the field.
Computer crimes are often in the news. A famous bank robber, asked why he robbed banks, replied, "Because that's where the money is." Today's criminals have learned where the money is. Instead of settling for a few thousand dollars in a bank robbery, those with enough computer knowledge can walk away from a computer crime with many millions. The National Computer Crimes Squad estimates that between 85 and 97 percent of computer crimes are not even detected. Fewer than 10 percent of all computer crimes are reported, mainly because organizations fear that their employees, clients, and stockholders will lose faith in them if they admit that their computers have been attacked. And few of the crimes that are reported are ever solved. Hacking was once a term used to describe someone with a great deal of knowledge about computers. Since then the definition has seriously changed. In every neighborhood there are criminals, so you could say that hackers are the criminals of the computers around us. There has been a great increase in the number of computer break-ins since the Internet became popular. How serious is hacking? In 1989, the Computer Emergency Response Team, an organization that monitors computer security issues in North America, said that it had 132 cases involving computer break-ins. In 1994 alone it had some 2,341 cases, nearly a 1,700 percent increase in just five years. An example is 31-year-old computer expert Kevin Mitnick, who was arrested by the FBI for stealing more than $1 million worth of data and about 20,000 credit card numbers through the Internet. In Vancouver, the RCMP have charged a teenager with breaking into a university computer network. There have been many cases of computer hacking; another one took place here in Toronto, when Adam Shiffman was charged with nine counts of fraudulent use of computers and eleven counts of mischief to data, which together carry a maximum sentence of 10 years in jail.
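As a quick check on the arithmetic behind the CERT figures just cited (132 cases in 1989, 2,341 in 1994):

```python
# Growth in CERT-reported break-in cases, 1989 to 1994.
cases_1989 = 132
cases_1994 = 2341
pct_increase = round((cases_1994 - cases_1989) / cases_1989 * 100)
# pct_increase is 1673, i.e. roughly a seventeen-fold rise in five years
```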
We see after reading the above information that hacking has been on the increase. With hundreds of cases every year dealing with hacking, this is surely a problem, and one that is increasing very quickly. Ten years ago hardware theft was almost impossible because of the size and weight of the computer components. Computer components were also expensive, so many companies would have security guards to protect them from theft. Today this is no longer the case: computer hardware theft is on the increase. Since the invention of the microchip, computers have become much smaller and easier to steal, and now, with portable and laptop computers that fit in your briefcase, it's even easier. While illegal high-tech information hacking gets all the attention, it's computer hardware theft that has become the latest in corporate crime. Access to valuable equipment skyrockets and black-market demand for parts increases. In factories, components are stolen from assembly lines for underground resale to distributors. In offices, entire systems are snatched from desktops by individuals seeking to install a home PC. In 1994, Santa Clara, Calif., recorded 51 such burglaries; that number doubled in just the first six months of 1995. Gunmen robbed workers at an Irvine, Calif., computer parts company, stealing $12 million worth of computer chips. At a large advertising agency in London, thieves came in over a weekend and took 96 workstations, leaving the company to recover from an $800,000 loss. A Chicago manufacturer had computer parts stolen from the back of a delivery van as he was waiting to enter the loading dock. It took less than two minutes for the doors to open, but that was enough time for thieves to get away with thousands of computer components. Hardware theft has become a real problem in the last few years; with cases popping up each day, we see that it is on the increase. As the network of computers gets bigger, so will the number of software thieves.
Electronic software theft over the Internet and other online services costs US software companies about $2.2 billion a year. A Business Software Alliance survey of 77 countries in 1994 estimated piracy losses totaling more than $15.2 billion. Dollar-loss estimates for the 54 countries surveyed the previous year show an increase of $2.1 billion, from $12.8 billion in 1993 to $14.9 billion in 1994; the additional 23 countries surveyed this year bring the 1994 worldwide total to $15.2 billion. With numbers this big, we can see that software piracy is on the increase. Many say that the Internet is great; that is true, but there's also a bad side of the Internet that is hardly ever noticed. Crime on the Internet is increasing dramatically. Many act as if copyright law, privacy law, broadcasting law, and laws against spreading hatred mean nothing there. There are many different kinds of crime on the Internet, such as child pornography, credit card fraud, software piracy, invasion of privacy, and spreading hatred. There have been many cases of child pornography on the Internet, mainly because people find it very easy to transfer images over the Internet without getting caught. Child pornography on the Internet has more than doubled since 1990; an example of this is Alan Norton of Calgary, who was charged with being part of an international porn ring. Credit card fraud has caused many problems for people and for corporations that hold credit information in their databases. With banks going on-line in the last few years, criminals have found ways of breaking into databases and stealing thousands of credit cards and information on their clients. In the past few years thousands of clients have reported millions of transactions made on credit cards that they do not know of.
Invasion of privacy is a real problem with the Internet; it is one of the things that turns many away from it. Now, with hacking sites on the Internet, it is easy to download electronic mail (e-mail) readers that allow you to hack servers and read incoming mail belonging to others. Many sites now carry these e-mail readers, and since then invasion of privacy has increased. Spreading hatred has also become a problem on the Internet. This information can be easily accessed by going to any search engine, for example http://www.webcrawler.com, and searching for "KKK"; this will bring up thousands of sites that contain information on the "KKK". As we can see, with the freedom on the Internet, people can easily incite hatred. After reading that information we see that the Internet has crime going on of all kinds. The above information provides enough proof that, without doubt, computer crime is on the increase in many areas such as hacking, hardware theft, software piracy, and the Internet. Hacking can be seen in everyday news, with big corporations often victims of hackers. Hardware theft has become more popular because of the value of computer components. Software piracy is a huge problem; as you can see, about $15 billion is lost each year. Finally, the Internet is good and bad, but there's a lot more bad than good, with credit card fraud and child pornography going on. We see that computer crime is on the increase and something must be done to stop it.

f:\12000 essays\technology & computers (295)\Computer Crime 5.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Computer Crime

Computer crimes need to be prevented and halted through increased computer network security measures, as well as tougher laws and enforcement of those laws in cyberspace. Computer crime is generally defined as any crime accomplished through special knowledge of computer technology. All that is required is a personal computer, a modem, and a phone line.
Increasing instances of white-collar crime involve computers as more businesses automate and information becomes an important asset. Computers are objects of crime when they or their contents are damaged, as when terrorists attack computer centers with explosives or gasoline, or when a "computer virus" (a program capable of altering or erasing computer memory) is introduced into a computer system. As subjects of crime, computers represent the electronic environment in which frauds are programmed and executed; an example is the transfer of money balances in accounts to perpetrators' accounts for withdrawal. Computers are instruments of crime when used to plan or control such criminal acts as complex embezzlements that might occur over long periods of time, or when a computer operator uses a computer to steal valuable information from an employer. Computers have been used for most kinds of crime, including fraud, theft, larceny, embezzlement, burglary, sabotage, espionage, murder, and forgery, since the first cases were reported in 1958. One study of 1,500 computer crimes established that most of them were committed by trusted computer users within businesses, i.e. persons with the requisite skills, knowledge, access, and resources. Much of known computer crime has consisted of entering false data into computers, which is simpler and safer than the complex process of writing a program to change data already in the computer. With the advent of personal computers to manipulate information and access computers by telephone, increasing numbers of crimes (mostly simple but costly electronic trespassing, copyrighted-information piracy, and vandalism) have been perpetrated by computer hobbyists, known as "hackers," who display a high level of technical expertise. For many years, the term hacker defined someone who was a wizard with computers and programming. It was an honor to be considered a hacker.
But when a few hackers began to use their skills to break into private computer systems and steal money, or interfere with the systems' operations, the word acquired its current negative meaning. Organized professional criminals have been attacking and using computer systems as they find their old activities and environments being automated. There are not many valid statistics about the extent and results of computer crime. Victims often resist reporting suspected cases, because they can lose more from embarrassment, lost reputation, litigation, and other consequential losses than from the acts themselves. Limited evidence indicates that the number of cases is rising each year because of the increasing number of computers in business applications where crime has traditionally occurred. The largest recorded crimes, involving insurance, banking, product inventories, and securities, have resulted in losses of tens of millions to billions of dollars, all facilitated by computers. Conservative estimates have put yearly losses due to computer hackers at $3 billion to $100 billion. These losses are increasing in step with the number of computers logged on to networks, which is almost an exponential growth rate. The seriousness of cybercrimes also increases as the dependency on computers becomes greater and greater. Crimes in cyberspace are becoming more and more popular for several reasons. The first is that computers are becoming more and more accessible, and are thus just another tool in the criminal's arsenal. The other reason computer crimes are becoming more common is that they are sometimes very profitable. The average computer crime nets a total of $650,000 (American, 1991 standards), more than seventy-two times that of the average bank robbery. Today's techno-bandits generally fall into one of three groups, listed in order of the threat they pose: 1. Current or former computer operations employees. 2.
Career criminals who use computers to ply their trade. 3. The hacker. Outsiders who break into computer systems are sometimes more of a threat, but employees and ex-employees are usually in a better position to steal. Because we rely more and more on computers, we also depend on those who make them and run them. The U.S. Bureau of Labor Statistics projects that the fastest-growing employment opportunities are in the field of computers and data processing. Since money is a common motive for those who use their computing know-how to break the law, losses from computer theft are expected to grow as the number of computer employees rises. The following is an example of how employees who work on computers can profit at the employer's expense. In 1980, two enterprising ticket agents for TransWorld Airlines (TWA) discovered how to make their employer's computer work for them. The scam went like this: when a passenger used cash to pay for a one-way ticket, Vince Giovengo sent in the credit change form, which should have been discarded, and kept the receipt that should have been given to the customer for paying cash. Samuel Paladina, who helped board passengers, kept the part of the traveler's ticket that should have been returned to the customer. The two agents used computers to reassemble the ticket from the pieces they had. They then marked the ticket void and kept the cash the traveler had paid. The swindle was finally discovered by another employee who questioned the large number of voided tickets, but only after approximately $93,000 had been taken. The two TWA employees were tried and convicted of federal wire fraud in the United States. They each received six months in prison as a result of their crime. The penalty they had to pay should have been much, much higher, in order to deter the use of computers in crimes in the future. This is true not just for the United States and Canada, but for every country in the world.
Another computer heist, one of the largest ever, involved several highly placed employees of the Volkswagen car company of West Germany. In 1987 the company discovered that these "loyal" workers had managed to steal $260 million by reprogramming the computers to disguise the company's foreign currency transactions. The workers who committed the crime received 10 years in Germany's prison system, not nearly as harsh a sentence as if they had stolen the money through non-computerized means. This sets an example that computer crimes are easy to execute and are punished very lightly, which will evoke a downward spiral leading to more and more computer crimes. For career criminals, computers represent a new medium for their illegal actions. Computers only enhance the speed and quality of the crimes. Now professional criminals can steal or commit almost any other crime they want, simply by typing directions into the computer. Computers are quickly being added to the list of the tools of crime. Hackers often act in groups. The actions of several groups of hackers, most notably the Masters of Deception (MOD) and the Legion of Doom, have been exposed in the media recently. These groups, and most malicious hackers, are involved in computer crime for the profit available to them. Individual hackers are often male teenagers. They comprise the majority of computer criminals, and they do pose a major threat to society's computer users. Computer criminals have various reasons for doing what they do. The main reason computer crimes are committed on the large scale that they are is profit. As earlier stated, the average computer crime nets more than seventy-two times that of the average bank robbery. Many skilled computer operators see this as an opportunity to make some quick profit. Cybercrimes are usually committed only by people who would not commit any other type of crime.
This shows that the chances of getting caught and punished are perceived as very low, a view that must be changed. Some hackers feel that it is their social responsibility to keep cyberspace a free domain, without authorities. This is accomplished by sharing information and removing the concept of property in cyberspace. Therefore, they feel that it is proper to take information and share it. In their minds, they have committed no crime, but in the victim's eyes, they deserve to be punished. Other hackers think that it is a challenge to read others' files and see how far they can penetrate into a system. It is pure enjoyment for these hackers to explore a new computer network. It becomes a challenge to gain access to strange, new networks. They often argue that they are only curious and cause no harm by merely exploring, but that is not always the case. Where did my homework files go? Who is making charges to my credit card? Sounds like someone is out for revenge. Computers have become a modern-day tool for seeking revenge. Here is an example: a computer system operator was fired from CompuServe (a major Internet provider), and by the next day his former manager's credit card numbers had been distributed to thousands of people via electronic bulletin boards. The manager's telephone account had been charged with thousands of long-distance phone calls, and hundreds of unpaid tickets had been issued against his driver's license. This shows the awesome power of a knowledgeable hacker. These hackers also try to obtain free services, such as Internet access and long-distance telephone access. Banks and brokerage houses are major targets when stealing money is the objective, because of their increased reliance on electronic funds transfer (EFT). Using EFT, financial institutions and federal and provincial governments pass billions of dollars of funds and assets back and forth over the phone lines every day. 
Money is transferred from bank to bank or account to account by sending telephone messages between computers. In the old days, B.C. (Before Computers), transferring money usually involved armored cars and security guards. Today, computer operators simply type in the appropriate instructions and funds are zipped across telephone lines from bank A to bank B. These codes can be intercepted by hackers and used to gain credit card numbers, ATM and personal identification numbers, as well as the actual money being transferred. With the ATM and credit card numbers, they have access to all the money in the corresponding accounts. The act of changing data going into a computer, or during output from the computer, is called "data diddling". One New Jersey bank suffered a $128,000 loss when the manager of computer operations made some changes in the account balances, transferring the money to the accounts of three of his friends. "Salami slicing" is a form of data diddling that occurs when an employee steals small amounts of money from a large number of sources through the electronic changing of data (like slicing thin pieces from a roll of salami). For example, in a bank, the interest paid into accounts may routinely be rounded to the nearest cent. A dishonest computer programmer may change the program so that all the fractions of cents left over go into his account. This type of theft is hard to detect because the books still balance, yet the sum of money can be extremely large when taken from thousands of accounts over a period of time. "Phone phreaks" were the first hackers. They are criminals who break into the telephone system through various means to gain free access to the telephone network. Since telephone companies use large and powerful computers to route their calls, they are an obvious target for hackers. Stealing information in the form of software (computer programs) is also illegal. 
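To make the salami-slicing idea concrete, here is a small, purely illustrative sketch (all balances, interest rates, and account counts are hypothetical, not taken from any real case) of why the fractions of a cent matter: each account looks correct on its own, but the discarded remainders add up.

```python
# Illustrative only: shows how interest rounded down to a whole cent
# leaves fractional remainders that, across thousands of accounts,
# sum to a meaningful amount. All figures are hypothetical.

def pay_interest(balances_cents, annual_rate=0.05):
    """Credit interest to each account, rounding down to a whole cent.

    Returns the new balances and the total of the discarded fractions,
    which is exactly what a 'salami slicing' program would divert.
    """
    skimmed = 0.0
    new_balances = []
    for cents in balances_cents:
        exact = cents * annual_rate      # exact interest, in cents
        credited = int(exact)            # rounded down to a whole cent
        skimmed += exact - credited      # the leftover fraction of a cent
        new_balances.append(cents + credited)
    return new_balances, skimmed

# 10,000 hypothetical accounts, each holding $1,234.56 (123,456 cents)
balances = [123456] * 10_000
_, leftover = pay_interest(balances)
print(f"Fractions skimmed from 10,000 accounts: ${leftover / 100:.2f}")
```

Each account is short by less than a cent, so the books balance to the penny, yet the skimmed total here comes to roughly eighty dollars per interest run; repeated over time, as the essay notes, the take becomes very large.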
Making copies of commercial software, for resale or to give to others, is a crime. These offences represent the fastest growing area of computer crime. In one case, a group of teenagers pretending to be a software firm sold $350,000 worth of stolen software to a Swiss electronics company. While most hackers claim curiosity and a desire for profit as motives for cracking computer systems, a few "dark-side hackers" seem to intentionally harm others. For these individuals, computers are convenient tools of wickedness. One crazed hacker broke into the North American Air Defense computer system and the U.S. Army's MASNET computer network. While browsing the files, officials say, he had the ability to launch missiles at the USSR. This could have led to a nuclear war, and possibly the destruction of the world. There are more than 1200 bugs out there, and the infections they spread put the victim out of action until the healing process begins (if there is a healing process). This may sound like a description of the common cold or flu virus, except that this virus does not attack people. This bug is made by human hands and it attacks computers. It is spread through shared software, almost as easily as a sneeze, and it can be every bit as debilitating as the flu. Around the world on March 6, 1992, computer users reported for work only to find that their computers didn't work. The machines had "crashed" due to Michelangelo, a computer virus set to go off on the Renaissance artist's 517th birthday and destroy data on every infected computer. Approximately 10,000 computers were hit worldwide. The virus disabled the computers, causing millions of dollars worth of downtime and lost data. Computer crimes are becoming more and more dangerous. New laws and methods of enforcement need to be created; the evidence is above. An effort is being made by governments, but it is not enough. 
The problem is an international affair, and should be treated as such. Current Canadian laws are some of the most lenient among industrialized countries. They were also put in place much later than those of other countries, such as the United States and Japan. The Criminal Law Amendment Act, 1985 included a number of specific computer crime-related offences. Now, for the first time, Canadian law enforcement agencies can lay charges relating to cybercrime. The following text is an excerpt from Martin's Annual Criminal Code, 1995 edition: 326. (1) Every one commits theft who fraudulently, maliciously, or without colour of right, (b) uses any telecommunication facility or obtains any telecommunication service. (2) In this section and section 327, "telecommunication" means any transmission, emission or reception of signs, signals, writing, images or sounds or intelligence of any nature by wire, radio, visual, or any other electro-magnetic system. 342.1 (1) Every one who, fraudulently and without colour of right, (a) obtains, directly or indirectly, any computer service, (b) by means of an electro-magnetic, acoustic, mechanical or other device, intercepts or causes to be intercepted, directly or indirectly, any function of a computer system, or (c) uses or causes to be used, directly or indirectly, a computer system with intent to commit an offence under paragraph (a) or (b) or an offence under section 430 in relation to data or a computer system, is guilty of an indictable offence and liable to imprisonment for a term not exceeding ten years, or is guilty of an offence punishable on summary conviction. 430. 
(1.1) Every one commits mischief who willfully (a) destroys or alters data; (b) renders data meaningless, useless or ineffective; (c) obstructs, interrupts or interferes with any person in the lawful use of data; or (d) obstructs, interrupts or interferes with the lawful use of data or denies access to data to any person who is entitled to access thereto. These Canadian laws are already outdated, and they are only eleven years old. They need to be amended to include stiffer penalties. At the time of their creation in 1985, the laws were deemed adequate, because computer crimes were not looked upon as a serious issue with far-reaching effects. In 1996, computer crime has become a damaging and dangerous part of life. It is now necessary to revamp these laws to include young offenders. The young people committing some of these crimes have very detailed knowledge of computer systems and computer programming. If they can handle this type of knowledge, and commit these crimes, they should be able to foresee the consequences of their actions. Most young hackers feel that they are bright, and therefore should be able to understand the results of their actions on others' computers and computer systems. The laws should treat these young offenders like adults, because they realize what they are doing is wrong, and should suffer the consequences. Some of the computer crimes listed in the Criminal Code are only summary offences, and thus are not considered very serious. This spreads the message to hackers that the crimes are not serious, but they are. Since the hackers don't view the crimes as serious, they are likely to commit more of them. If the consequences of breaking any laws referring to computer crime were made tougher, hackers would realize what they are doing is wrong. They would also see other hackers being charged with offences under the Criminal Code and figure out that they may be next on the list to be punished for their actions. 
Not only do these laws need to be made tougher, they need to be enforced consistently. The authorities from all countries must hold a conference to discuss the need for consistent enforcement of the laws referring to computer crime. This is because computer crimes are truly international. A hacker in Canada may break into a bank in Switzerland. Does the criminal get punished by the laws of Canada or by the laws of Switzerland? This needs to be decided upon. The authorities must mount special operations to stop computer crimes, just as they do for drug trafficking. Much time must be devoted to stopping these crimes before they lead to disaster. The problem is getting out of hand, and the public must actively cooperate with the authorities in order to bring it under control. Security is one matter that we can take into our own hands. Until new laws are created and enforced, it is up to the general computer-using public to protect themselves. The use of passwords, secure access multiports and common sense can prevent computer crime from making victims of us all. Passwords can add a remarkable amount of security to a computer system. The cases where passwords have been cracked are rare. Included with the password program must be an access-restriction feature. This limits the number of guesses at a password to only a few tries, thus effectively eliminating 98% of intruders. Secure access multiports (SAMs) are the best protection against computer crime making a victim out of a computer user. When joining a network, the user calls in and enters a password and access code. The user is then disconnected from the network. If the network recognizes the user as a valid one, it will call the user back and allow him or her to log on. If the user is invalid, the network will not attempt to reconnect with the user (see figure 1). This prevents unwanted persons from logging on to a network. Common sense is often the best defense against computer crime. 
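The call-back scheme just described can be sketched in a few lines. This is a minimal illustration of the idea, not a real SAM product; the user directory, names, numbers, and function are all hypothetical.

```python
# Sketch of the secure-access-multiport call-back flow described above.
# Hypothetical directory of valid users: each has a password and the
# one pre-registered phone number the system is allowed to call back.
REGISTERED_USERS = {
    "alice": {"password": "s3cret", "callback_number": "555-0100"},
}

def login_request(username, password):
    """Validate credentials, then return the number to call back.

    Returns None for an invalid user, modelling the SAM behavior of
    simply hanging up and never reconnecting.
    """
    user = REGISTERED_USERS.get(username)
    if user is None or user["password"] != password:
        return None                      # invalid: no call-back attempted
    return user["callback_number"]       # valid: dial this number back

# Even a thief who has stolen the password gains nothing: the call-back
# goes to the legitimate user's registered line, not the thief's.
assert login_request("alice", "s3cret") == "555-0100"
assert login_request("alice", "wrong") is None
assert login_request("mallory", "s3cret") is None
```

The design point is that knowing the password alone is not enough; the attacker would also have to be physically sitting on the registered phone line when the network calls back.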
Simple precautions help: checking for and disinfecting viruses on a regular basis, not sharing your password, and not giving out your credit card number on online services (i.e., the Internet). Also, employers can restrict the access employees have to computers at the place of employment. This would prevent most computer crimes executed by employees. If new laws and enforcement of those laws are not soon established, along with heightened security measures, the world will have a major catastrophe as a result of computer activity. The world is becoming increasingly dependent on computers, and the crimes committed will have greater and greater impact as the dependency rises. One possible end of the world, caused by a computer crime, was narrowly averted: the United States defense computer system was broken into, and the opportunity existed for the hacker to declare intercontinental nuclear war, thus leading to the death of the human race. Another event like this is likely to occur if laws, enforcement of the laws and security of computers are not beefed up. The greatest creation of all time, the computer, should not lead to the destruction of the race that created it.

Computer Crime In The 1990's

We're being ushered into the digital frontier. It's a cyberland with incredible promise and untold dangers. Are we prepared? It's a battle between modern-day computer cops and digital hackers. Just think of what is controlled by computer systems: virtually everything. By programming a telephone voice mail system to repeat the word "yes" over and over again, a hacker has beaten the system. The hacker of the 1990's is increasingly organized, very clear in what they're looking for, and very, very sophisticated in their methods of attack. 
As hackers have become more sophisticated and more destructive, governments, phone companies and businesses are struggling to defend themselves. Phone Fraud: In North America, the telecommunications industry estimates that long-distance fraud committed by computer hackers costs anywhere from five hundred million to perhaps five billion dollars every year; the exact figures are hard to be sure of. Making an unwitting company pay for long-distance calls is the most popular form of phone fraud today. The first step is to gain access to a private automated branch exchange, known as a "PABX" or "PBX". One of these can be found in any company with twenty or more employees. A PABX is a computer that manages the phone system, including its voice mail. Once inside a PABX, a hacker looks for a phone whose voice mail has not yet been programmed. The hacker then cracks its access code and programs its voice mail account to accept charges for long-distance calls. Until the authorities catch on, often not for a few days, hackers can use the voice mail account to make free and untraceable calls all over the world. The hackers who commit this type of crime are becoming increasingly organized. Known as "call cell operators," they set up fly-by-night storefronts where people off the street can come in and make long-distance calls at a large discount. For the call cell operators, of course, the calls cost nothing; by hacking into a PABX system they can put all the charges on the victimized company's tab. With a set of stolen voice mail access codes known as "good numbers," hackers can crack into another phone whenever a company disables the one they're using. In some cases call cell operators have run up hundreds of thousands of dollars in long-distance charges, driving businesses straight into bankruptcy. Hacking into a PABX is not as complicated as some people seem to think. 
The typical scenario we find is an individual with a "demon dialer" hooked up to a personal home computer, which need not be a high-powered machine at all, only one connected by modem to a telephone line. The demon dialer is then programmed to dial number after number with the express purpose of finding and recording dialtone. A demon dialer is a software program that automatically calls thousands of phone numbers to find ones that are connected to computers; it is a basic hacker tool that can be downloaded from the Internet, and such programs are extremely easy to use. The intention is to acquire dialtone, which enables the hacker to move freely through the telephone network. It is generally getting more sinister. We are now seeing a criminal element involved, in terms of the crimes they commit: drugs, money laundering, etc. These people are very careful; they want to hide their call patterns, so they'll hire hackers to get codes for them so they can dial from several different calling locations and cannot be detected. The world's telephone network is a vast maze with many places to hide, but once a hacker is located, the phone company and police can track their every move. The way they keep track is by means of a device called a "DNR," or dial number recorder. This device monitors the dialing patterns of any suspected hacker. It lists all the numbers that have been dialed from their location, the duration of each telephone call and the time of disconnection. The process of catching a hacker begins at the phone company's central office, where thousands of lines converge on a mainframe computer; technicians can locate the exact line that leads to a suspected hacker's phone at the touch of a button. 
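The information a DNR captures, as described above, amounts to a simple record per call: the number dialed, how long the call lasted, and when it ended. The sketch below models that record and one plausible use of it; the field names and the short-call heuristic are hypothetical illustrations, not an actual phone-company system.

```python
# Sketch of the kind of record a dial number recorder (DNR) keeps,
# per the description above: number dialed, call duration, and time
# of disconnection. Field names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DNRRecord:
    dialed_number: str
    connected_at: datetime
    duration: timedelta

    @property
    def disconnected_at(self):
        # Time of disconnection, derived from connect time + duration.
        return self.connected_at + self.duration

def looks_like_scanning(records, max_seconds=10):
    """Flag a log in which every call lasted only a few seconds,
    the telltale pattern of a demon dialer hunting for dialtone."""
    return all(r.duration.total_seconds() <= max_seconds for r in records)

# A burst of very short calls to sequential numbers in the small hours:
log = [
    DNRRecord("555-0001", datetime(1995, 3, 1, 2, 0, 0), timedelta(seconds=4)),
    DNRRecord("555-0002", datetime(1995, 3, 1, 2, 0, 10), timedelta(seconds=4)),
    DNRRecord("555-0003", datetime(1995, 3, 1, 2, 0, 20), timedelta(seconds=5)),
]
print(looks_like_scanning(log))
```

A normal subscriber's log mixes long and short calls, so this kind of pattern check is one way investigators could separate ordinary traffic from automated scanning.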
With the DNR device, the "computer police" retrieve the numbers called and determine why each call was made; if a call was made with illegal intent, they will take action, and the person can be put in prison for up to five years and fined up to $7,500. The telephone network is a massive electronic network that depends on thousands of computer-run software programs, and all this software in theory can be reprogrammed for criminal use. The telephone system is, in other words, a potentially vulnerable system; by cracking the right codes and inputting the correct passwords, a hacker can sabotage a switching system for millions of phones, paralyzing a city with a few keystrokes. Security experts say telephone terrorism poses a threat society hasn't even begun to fathom! You have people hacking into systems all the time. There were groups in the U.S.A. in 1993 that shut down three of the four telephone switch stations on the east coast; if they had shut down the final switch station as well, the whole east coast would have been without phones. Things of this nature can happen and have happened in the past. Back in the old days you had mechanical switches and crossbars, things of that nature. Today all telephone switches are computerized, and they're everywhere. A computer switch is exactly what the first word says it is: a switch being operated by a computer. The computer is connected to a modem, and so are you and all the hackers; therefore you too can run the switches. Our generation is the first to travel within cyberspace, a virtual world that exists within all the computers that form the global net. For most people today cyberspace is still a bewildering and alien place. How computers work and how they affect our lives is still a mystery to all but the experts, but expertise doesn't necessarily guarantee morality. 
Originally the word hacker meant a computer enthusiast, but now that the Internet has revealed its potential for destruction and profit, the hacker has become the outlaw of cyberspace. Not only do hackers commit crimes that cost millions of dollars, they also publicize their illegal techniques on the net, where innocent minds can find them and be seduced by the allure of power and money. This vast electronic neighborhood of bits and bytes has stretched the concepts of law and order. Like handbills stapled to telephone poles, the Internet appears to defy regulation. This relatively new medium has added subtleties and nuances to the words "a gray area" and "right and wrong". Most self-described hackers say they have been given a bad name and that they deserve more respect. For the most part, they say, hackers abide by the law; when they do steal a password or break into a network, they are motivated by a desire for knowledge, not by malicious intent. Teenagers are especially attracted by the idea of getting something for nothing. When system managers try to explain to hackers that it is wrong to break into computer systems, there is little point, because hackers with the aid of a computer possess tremendous power. They cannot be controlled, and they have the ability to break into any computer system they feel like. But suppose one day a hacker decides to break into a system owned by a hospital, and this computer is in charge of programming the therapy for a patient there. If the hacker inputs the incorrect code, the therapy can be interfered with and the patient may be seriously hurt, even though this wasn't done deliberately. These are the types of circumstances that give hackers a bad reputation. Today anyone with a computer and a modem can enter millions of computer systems around the world. 
On the net they say bits have no boundaries; this means a hacker halfway around the world can steal passwords and credit card numbers, break into computer systems and plant crippling viruses as easily as if they were just around the corner. The global network allows hackers to reach out and rob distant people with lightning speed. If cyberspace is a type of community, a giant neighborhood made up of networked computer users around the world, then it seems natural that many elements of traditional society can be found taking shape as bits and bytes. With electronic commerce come electronic merchants, plugged-in educators provide networked education, and doctors meet with patients in offices on-line. It should come as no surprise that there are also cybercriminals committing cybercrimes. As an unregulated hodgepodge of corporations, individuals, governments, educational institutions, and other organizations that have agreed in principle to use a standard set of communication protocols, the Internet is wide open to exploitation. There are no sheriffs on the information highway waiting to zap potential offenders with a radar gun or search for weapons if someone looks suspicious. By almost all accounts, this lack of "law enforcement" leaves net users to regulate each other according to the reigning norms of the moment. Community standards in cyberspace appear to be vastly different from the standards found at the corner of Markham and Lawrence. Unfortunately, cyberspace is also a virtual tourist trap where faceless, nameless con artists can work the crowds. Mimicking real life, crimes and criminals come in all varieties on the Internet. The FBI's National Computer Crime Squad is dedicated to detecting and preventing all types of computer-related crimes. 
Some issues being carefully studied by everyone from net veterans to law enforcement agencies include: Computer Network Break-Ins: Using software tools installed on a computer in a remote location, hackers can break into computer systems to steal data, plant viruses or trojan horses, or work mischief of a less serious sort by changing user names or passwords. Network intrusions have been made illegal by the U.S. federal government, but detection and enforcement are difficult. Industrial Espionage: Corporations, like governments, love to spy on the enemy. Networked systems provide new opportunities for this, as hackers-for-hire retrieve information about product development and marketing strategies, rarely leaving behind any evidence of the theft. Not only is tracing the criminal labor-intensive, convictions are hard to obtain when laws are not written with electronic theft in mind. Software Piracy: According to estimates by the U.S. Software Publishers Association, as much as $7.5 billion of American software may be illegally copied and distributed worldwide. These copies work as well as the originals, and sell for significantly less money. Piracy is relatively easy, and only the largest rings of distributors are likely to serve hard jail time when prisons are overcrowded with people convicted of more serious crimes. Child Pornography: This is one crime that is clearly illegal, both on and off the Internet. Crackdowns may catch some offenders, but there are still ways to acquire images of children in varying stages of dress and performing a variety of sexual acts. Legally speaking, people who provide access to child porn face the same charges whether the images are digital or on a piece of paper. Trials of network users arrested in a recent FBI bust may challenge the validity of those laws as they apply to online services. 
Mail Bombings: Software can be written that will instruct a computer to do almost anything, and terrorism has hit the Internet in the form of mail bombings. By instructing a computer to repeatedly send mail (email) to a specified person's email address, the cybercriminal can overwhelm the recipient's personal account and potentially shut down entire systems. This may not be illegal, but it is certainly disruptive. Password Sniffers: Password sniffers are programs that monitor and record the names and passwords of network users as they log in, jeopardizing security at a site. Whoever installs the sniffer can then impersonate an authorized user and log in to access restricted documents. Laws are not yet adequate to prosecute a person for impersonating another person on-line, but laws designed to prevent unauthorized access to information may be effective in apprehending hackers using sniffer programs. The Wall Street Journal suggests in recent reports that hackers may have sniffed out passwords used by members of America On-line, a service with more than 3.5 million subscribers. If the reports are accurate, even the president of the service found his account security jeopardized. Spoofing: Spoofing is the act of disguising one computer to electronically "look" like another computer in order to gain access to a system that would normally be restricted. Legally, this can be handled in the same manner as password sniffers, but the law will have to change if spoofing is going to be addressed with more than a quick-fix solution. Spoofing was used to access valuable documents stored on a computer belonging to security expert Tsutomu Shimomura. Credit Card Fraud: The U.S. Secret Service believes that half a billion dollars may be lost annually by customers who have credit card and calling card numbers stolen from on-line databases. 
Security measures are improving, and traditional methods of law enforcement seem to be sufficient for prosecuting the thieves of such information. Bulletin boards and other on-line services are frequent targets for hackers who want to access large databases or credit card information. Such attacks usually result in the implementation of stronger security systems. Since there is no single widely-used definition of computer-related crime, computer network users and law enforcement officials must distinguish between illegal or deliberate network abuse and behavior that is merely annoying. Legal systems everywhere are busily studying ways of dealing with crimes and criminals on the Internet.

A young man sits illuminated only by the light of a computer screen. His fingers dance across the keyboard. While it appears that he is only word processing or playing a game, he may be committing a felony. In the state of Connecticut, computer crime is defined as: 53a-251. Computer Crime (a) Defined. A person commits computer crime when he violates any of the provisions of this section. (b) Unauthorized access to a computer system. (1) A person is guilty of the computer crime of unauthorized access to a computer system when, knowing that he is not authorized to do so, he accesses or causes to be accessed any computer system without authorization... 
(c) Theft of computer services. A person is guilty of the computer crime of theft of computer services when he accesses or causes to be accessed or otherwise uses or causes to be used a computer system with the intent to obtain unauthorized computer services. (d) Interruption of computer services. A person is guilty of the computer crime of interruption of computer services when he, without authorization, intentionally or recklessly disrupts or degrades or causes the disruption or degradation of computer services or denies or causes the denial of computer services to an authorized user of a computer system. (e) Misuse of computer system information. A person is guilty of the computer crime of misuse of computer system information when: (1) As a result of his accessing or causing to be accessed a computer system, he intentionally makes or causes to be made an unauthorized display, use, disclosure or copy, in any form, of data residing in, communicated by or produced by a computer system. Penalties for committing computer crime range from a class B misdemeanor to a class B felony. The severity of the penalty is determined by the monetary value of the damages inflicted. (2) The law has not always had much success stopping computer crime. In 1990 there was a nationwide crackdown on illicit computer hackers, with arrests, criminal charges, one dramatic show-trial, several guilty pleas, and huge confiscations of data and equipment all over the USA. The Hacker Crackdown of 1990 was larger, better organized, more deliberate, and more resolute than any previous effort. The U.S. Secret Service, private telephone security, and state and local law enforcement groups across the country all joined forces in a determined attempt to break the back of America's electronic underground. It was a fascinating effort, with very mixed results. In 1982, William Gibson coined the term "cyberspace". 
Cyberspace is defined as "the 'place' where a telephone conversation appears to occur. Not inside your actual phone, the plastic device on your desk... The place between the phones. The indefinite place out there." (1, p. 1) The words "community" and "communication" share the same root. Wherever one allows many people to communicate, one creates a community. "Cyberspace" is as much of a community as any neighborhood or special interest group. People will fight harder to defend the communities that they have built than they would fight to protect themselves. This two-sided fight truly began when the AT&T telephone network crashed on January 15, 1990. The crash occurred due to a small bug in AT&T's own software. It began with a single switching station in Manhattan, New York, but within ten minutes the domino effect had brought down over half of AT&T's network; the rest was overloaded, trying to compensate for the overflow. This crash represented a major corporate embarrassment. Sixty thousand people lost their telephone service completely. During the nine hours of effort that it took to restore service, some seventy million telephone calls went uncompleted. Because of the date of the crash, Martin Luther King Day (the most politically touchy holiday), and the absence of a physical cause of the destruction, AT&T did not find it difficult to rouse suspicion that the network had not crashed by itself, that it had been crashed, intentionally, by the people the media has called hackers. Hackers define themselves as people who explore technology. If that technology takes them outside the boundaries of the law, they worry very little about it. True hackers follow a "hacker's ethic", and never damage systems or leave electronic "footprints" where they have been. Crackers are hackers who use their skills to damage other people's systems or for personal gain. These people, mistakenly referred to as hackers by the media, have been sensationalized in recent years. 
Software pirates, or warez dealers, are people who traffic in pirated software (software that is illegally copied and distributed). These people are usually looked down on by the more technically sophisticated hackers and crackers. Another group of law-breakers that merits mention is the phreakers. Telephone phreaks are people who experiment with the telephone network. Their main goal is usually to receive free telephone service, through the use of such devices as homemade telephone boxes. They are often much more extroverted than their computer equivalents. Phreaks have been known to create world-wide conference calls that run for hours (on someone else's bill, of course). When someone has to drop out, they call up another phreak to join in.

Hackers come from a wide variety of odd subcultures, with a variety of languages, motives and values. The most sensationalized of these is the "cyberpunk" group. The cyberpunk FAQ (Frequently Asked Questions list) states:

2. What is cyberpunk, the subculture? Spurred on by cyberpunk literature, in the mid-1980's certain groups of people started referring to themselves as cyberpunk, because they correctly noticed the seeds of the fictional "techno-system" in Western society today, and because they identified with the marginalized characters in cyberpunk stories. Within the last few years, the mass media has caught on to this, spontaneously dubbing certain people and groups "cyberpunk". Specific subgroups which are identified with cyberpunk are: Hackers, Crackers, and Phreaks: "Hackers" are the "wizards" of the computer community; people with a deep understanding of how their computers work, and can do things with them that seem "magical". "Crackers" are the real-world analogues of the "console cowboys" of cyberpunk fiction; they break into other people's computer systems, without their permission, for illicit gain or simply for the pleasure of exercising their skill. 
"Phreaks" are those who do a similar thing with the telephone system, coming up with ways to circumvent phone companies' calling charges and doing clever things with the phone network. All three groups are using emerging computer and telecommunications technology to satisfy their individualist goals. Cypherpunks: These people think a good way to bollix "The System" is through cryptography and cryptosystems. They believe widespread use of extremely hard-to-break coding schemes will create "regions of privacy" that "The System" cannot invade. (3)

This simply serves to show that computer hackers are not only teenage boys with social problems who sit at home with their computers; they can be anyone. The crash of AT&T's network, and the company's desire to blame it on people other than itself, brought the political impetus for a new attack on the electronic underground. This attack took the form of Operation Sundevil. "Operation Sundevil" was a crackdown on those traditional scourges of the digital underground: credit card theft and telephone code abuse. The targets of these raids were computer bulletin board systems. Boards can be powerful aids to organized fraud. Underground boards carry lively, extensive, detailed, and often quite flagrant discussions of lawbreaking techniques and illegal activities. Discussing crime in the abstract, or discussing the particulars of criminal cases, is not illegal, but there are stern state and federal laws against conspiring in groups in order to commit crimes. It was these laws that were used to seize 25 of the "worst" offenders, chosen from a list of over 215 underground BBSs that the Secret Service had fingered for "carding" traffic. The Secret Service was not interested in arresting criminals. They sought to seize computer equipment, not computer criminals. 
Only four people were arrested during the course of Operation Sundevil: one man in Chicago, one man in New York, a nineteen-year-old female phreak in Pennsylvania, and a minor in California. This was a politically motivated attack designed to show the public that the government was capable of stopping this fraud, and to show the denizens of the electronic underground that the government could penetrate into the very heart of their society and destroy routes of communication, as well as bring down the legendary BBS operators. This is not an uncommon message for law-enforcement officials to send to criminals. Only the territory was new. Another message of Sundevil was to the employees of the Secret Service themselves: proof that such a large-scale operation could be planned and accomplished successfully. The final purpose of Sundevil was as a message from the Secret Service to their long-time rivals, the Federal Bureau of Investigation. Congress had not clearly stated which agency was responsible for computer crime. Later, it gave the Secret Service jurisdiction over any computers belonging to the government or responsible for the transfer of money. Although the Secret Service can't directly involve themselves in anything outside of this jurisdiction, they are often called on by local police for advice.

Hackers are unlike any other group of criminals in that they are constantly in contact with one another. There are two national conventions per year, and monthly meetings within each state. This has forced people to pose the question of whether hacking is really a crime at all. After seeing such movies as "The Net" or "Hackers", people have begun to wonder how vulnerable they individually are to technological crime. Cellular phone conversations can be easily overheard with modified scanners, as can conversations on cordless phones. Any valuable media involving numbers is particularly vulnerable. A common practice among hackers is "trashing". 
Not, as one might think, damaging public property, but actually going through a public area and methodically searching the trash for any useful information. Public areas that are especially vulnerable are ATM chambers and areas where people possess credit card printouts or telephone bills. This leads to another part of hacking that has very little to do with the technical details of computers or telephone systems. It is referred to by those who practice it as "social engineering". With the information found on someone's phone bill (account or phonecard number), an enterprising phreak can call up and impersonate an employee of the telephone company, obtaining usable codes without any knowledge of the system whatsoever. Similar stunts are often performed with ATM cards and PIN numbers. The resulting codes are either kept or used by whoever obtained them, traded or sold over Bulletin Board Systems or the Internet, or posted for anyone interested to find.

With the increasing movement of money from the physical to the electronic, stricter measures are being taken against electronic fraud, although this can backfire. In several instances, banks have covered up intrusions to prevent their customers from losing their trust in the security of the system. The truth has only come out long after the danger had passed. Electronic security is becoming a way of life for many people. As with the first cellular telephones, this movement has begun with the legitimately wealthy and the criminals. The most common security package is PGP, or Pretty Good Privacy. PGP uses RSA public-key encryption algorithms to provide military-level encryption to anyone who seeks to download the package from the Internet. The availability of this free package on the Internet caused an uproar and brought about a criminal investigation of the author, Phil Zimmermann. The United States government lists RSA encryption among weapons whose exportation is illegal. 
The Zimmermann case has not yet been resolved. The United States government has begun to take a large interest in the Internet and private Bulletin Board Systems. It recently passed the Communications Decency Act, which made it illegal to transmit through the Internet or phone lines in electronic form any "obscene or inappropriate" pictures or information. This Act effectively restricted the information on the Internet to that appropriate in PG-13 movies. As of June 12, 1996, the censorship section of the Communications Decency Act was overturned by a three-judge panel of the federal court of appeals, which stated that it violates Internet users' First Amendment rights, and that it is the responsibility of parents to censor their children's access to information, not the government's. The court of appeals, in effect, granted the Internet the protections previously granted to newspapers, one of the highest standards of freedom ensured by our Constitution. The Clinton administration has vowed to appeal this decision to the Supreme Court.

Technological crime is harder to prosecute than any other, because the police are rarely as technologically advanced as the people they are attempting to catch. This situation was illustrated by the recent capture of Kevin Mitnick. Mitnick had eluded police for years. After he broke into security expert Tsutomu Shimomura's computer, Shimomura took over the investigation and captured Mitnick in a matter of months. It will be fascinating to see, as technology continues to transform society, the way that technological criminals, usually highly intelligent and dangerous, will transform the boundaries of crime. Just as interesting will be how the government fights on this new battleground against the new types of crime, while preserving the rights and freedom of the American people. 
f:\12000 essays\technology & computers (295)\Computer Crime1.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

I. Ever since I got my first computer, I have enjoyed working on them. I have learned a tremendous amount about troubleshooting. With my recent computer I have come across computer crime. I got interested in hacking, phreaking, and salami slicing. But before I go too far, I need to learn more about it, like the consequences. One question in mind is what crimes there are and what kinds of things you can do with them. I would like to find out why people do these things. I would also like to learn the laws against all computer crime.

II. Today's computer society has brought a new form of crime. There are those "hackers" who break their way into computers to learn the system or get information. I found out in the book Computer Crime, written by Karen Judson, that "salami slicers" steal small amounts of money from many bank customers, which adds up to a great deal of money. I also read about phone phreaks, better known as "phreakers." They steal long distance phone services. Phreakers commit many other crimes against phone companies. In the book Computer Crime it states that most people commit these crimes because they were curious and wanted to explore the system. All they want to do is exploit systems, not destroy them. It is purely intellectual. I know one reason is that it can be very rewarding. Hackers are drawn to computers for the anonymity they allow. They feel powerful and can do anything. Hackers can be their own person outside the real world. I found out Arizona was the first state to pass a law against computer crime, in 1979. In 1980 the U.S. copyright act was amended to include software. I found out that in 1986 a computer fraud and abuse act was passed. This act was made to cover any crime or computer scheme that was missed by any former laws. Violations of any of these laws carry a maximum of five years in prison and a $250,000 fine. 
III. With my computer I can do lots of these things, but I choose not to, because I know that if you know computers you can do much more, curiosity-wise. If you know computers, you're set for the future. I'm not saying I don't have fun with my computer; I like causing a little trouble every now and then. Well, I pretty much covered the motives and intentions behind the most common computer crimes. I explained the laws and punishments for committing these crimes. I hope I cleared things up for you computer-illiterate people, and gave you a better understanding of the things that can be done. As you have read, you can see that computers can be, and are, more dangerous than guns.

f:\12000 essays\technology & computers (295)\Computer Crime4.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

It's the weekend, and you have nothing to do, so you decide to play around on your computer. You turn it on, and once it starts up, you start calling people with your modem, connecting to another world, with people just like you a button press away. This is all fine, but what happens when you start getting into other people's computer files? Then it becomes a crime. But what is a computer crime, really? Obviously it involves the use of a computer, but what are these crimes? Well, they are: hacking, phreaking, and software piracy. To begin, I will start with hacking. What is hacking? Hacking is basically using your computer to "hack" your way into another. Hackers use programs called scanners, which randomly dial numbers; any that answer with tones or carriers are recorded. These numbers are looked at by hackers and then used again. When the hacker calls up the number and connects, he's presented with a logon prompt. This is where the hacking really begins: the hacker tries to bypass the prompt any way he knows how and gain access to the system. 
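The scanner loop described above can be sketched as a harmless simulation. Everything here is made up: the 555 numbers, and `answers_with_carrier`, which merely stands in for dialing a number and listening for a modem tone; no real dialing happens.

```python
import random

# Hypothetical numbers that would answer with a modem carrier tone.
known_modems = {"555-0142", "555-0987"}

def answers_with_carrier(number):
    """Stand-in for dialing the number and listening for a carrier."""
    return number in known_modems

random.seed(1)                 # reproducible run, for illustration only
found = set()
for _ in range(5000):          # try 5000 random numbers in one prefix
    number = "555-%04d" % random.randrange(10000)
    if answers_with_carrier(number):
        found.add(number)      # recorded for a later logon attempt

print(sorted(found))           # whichever known modems happened to be dialed
```

The point of the sketch is only the shape of the loop: generate numbers, test each one, and keep the hits for later.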
Why do they do it? Well, let's go to a book and see: "Avid young computer hackers in their preteens and teens are frequently involved in computer crimes that take the form of trespassing, invasion of privacy, or vandalism. Quite often they are merely out for a fun and games evening, and they get entangled in the illegal use of their machines without realizing the full import of what they are doing." I have a hard time believing that, so let's see what a "hacker" has to say about what he does: "Just as they were enthralled with their pursuit of information, so are we. The thrill of the hack is not in breaking the law, it's in the pursuit and capture of knowledge." As you can see, the "hacker" doesn't set out to destroy things, although some do. It's in the pursuit of knowledge. Of course, this is still against the law. But where did all of this start? MIT is where hacking started; the people there would learn and explore computer systems all around the world. In the view of professionals, hacking is like drugs or any other addictive substance: an addiction for the mind, and once started it's difficult to stop. This could be true, as hackers know what they are doing is wrong, and they know the odds are they will be caught. But as I mentioned, some hackers are just criminals with above-average skills, using them to break into banks and other places where they can get money, or where they can destroy information. What a hacker does at a bank is take a few cents, or even a few fractions of a cent, from many different accounts; this may seem like nothing, but when all compiled it can be a lot. A stick-up robber averages about $8,000 each "job", and he has to put his life and personal freedom on the line to do it, while the computer hacker, in the comfort of his own living room, averages $500,000 a "job". As for people destroying information, this is done to take someone down; destruction of data could end a business, which for some is very attractive. 
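The fraction-of-a-cent bank scheme mentioned above is simple arithmetic. A toy sketch (with made-up balances and a made-up interest rate) skims the sub-cent remainder left over when each interest payment is rounded down to whole cents:

```python
from decimal import Decimal, ROUND_DOWN

balances = [Decimal("1234.56"), Decimal("987.65"), Decimal("5000.00")]
rate = Decimal("0.0415")       # hypothetical annual interest rate

skimmed = Decimal("0")
for balance in balances:
    interest = balance * rate                                 # exact amount owed
    credited = interest.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    skimmed += interest - credited                            # the "slice" nobody notices

print(skimmed)  # 0.011715 -- a fraction of a cent here, but it scales
```

Per account the slice is under a cent, which is why it goes unnoticed; across millions of accounts and repeated runs, it compounds into real money.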
It can cost a company thousands of dollars to restore the damage done. Now that you have an understanding of what a "hacker" is, it is time to move on to someone closely associated with a hacker: the phreak. But what is that? For the answer we turn to what is known as the "Official" Phreakers Manual: "Phreak [fr'eek] 1. The action of using mischievous and mostly illegal ways in order to not pay for some sort of telecommunications bill, order, transfer, or other service. It often involves usage of highly illegal boxes and machines in order to defeat the security that is set up to avoid this sort of happening. [fr'eaking] v. 2. A person who uses the above methods of destruction and chaos in order to make a better life for all. A true phreaker will not go against his fellows or narc on people who have ragged on him or do anything termed to be dishonourable to phreaks. [fr'eek] n. 3. A certain code or dialup useful in the action of being a phreak. (Example: "I hacked a new metro phreak last night.")" The latter two ideas of what a phreak is are rather weird. A phreak, like the hacker, likes to explore and experiment; however, his field of exploration is not other computers but the phone system as a whole. Phreaks explore the phone system, finding many different ways to do things, most often to make free calls. Why do they do this? "A hacker and phreaker will have need to use telephone systems much more than an average individual, therefore, methods which can be used to avoid toll charges are in order." A phreak has two basic ways of making free calls: he can call up codes or PBXs on his phone and then enter a code and make his call, or he can use Electronic Toll Fraud Devices. Codes are rather easy to get: the phreak will scan for them, but unlike a hacker he will save only the tone number(s) instead of the carrier(s). Then he will attempt to hack the code in order to use it. These codes use the digits 0 - 9 and can be any length, although most are not more than 10 digits. 
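Some quick arithmetic shows why those code lengths matter: every extra digit multiplies the number of possible codes by ten, which is what makes short codes guessable and long ones impractical to scan.

```python
# Number of possible numeric codes at a few lengths.
for length in (4, 6, 10):
    print(length, 10 ** length)

# Combined search space for every code of length 1 through 10.
total = sum(10 ** n for n in range(1, 11))
print(total)  # 11111111110 possible codes in all
```

A 4-digit code has only 10,000 possibilities, well within reach of an automated scan; a 10-digit code has ten billion, which is why most codes in practice stay at 10 digits or fewer only for the convenience of legitimate users, not the phreak.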
Electronic Toll Fraud Devices are known as Boxes in the underground. Most are the size of a pack of smokes, though they can be smaller or bigger. I will not go too deep. They are electronic devices that do various things, such as make outgoing calls free, make incoming calls free, simulate coins dropping in a phone, etc. People who "phreak" are caught a lot these days, thanks to new technology.

Software piracy is the most common computer crime; it is the illegal copying of software. "People wouldn't think of shoplifting software from a retail store, but don't think twice about going home and making several illegal copies of the same software." This is true, because I myself am guilty of it. The major problem is not people going out and buying the software and then making copies for everyone; it's the bulletin boards that cater to pirated software that really cause the problem. On any one of these boards one can find upwards of 300 to 1,000+ pieces of pirated software, open for anyone to take. This is a problem, and nothing can really be done about it. Few arrests are made in this area of computer crime. I will now devote a brief section to the above-mentioned BBSs; most are legal and do nothing wrong. However, there are many more that do accept pirated software, pornographic pictures, animations, and texts, as well as trading areas for phone codes, other BBSs, credit card numbers, etc. This is where a majority of hackers and phreaks, as well as those who continue to pirate software, come to meet and share stories. This is a new world where you can do anything. There are groups that get, crack, and courier software all over the world; some of them are called INC (International Network Of Crackers), THG (The Humble Guys), and TDT (The Dream Team). 
A number of other groups have followed suit, such as Phalcon/SKISM (Smart Kids Into Sick Methods), NuKE, and YAM (Youngsters Against McAfee). These are virus groups who write and courier their work anywhere they can; they just send it somewhere where anyone can take it and use it in any manner they wish, such as getting even with someone. All of these activities are illegal, but nothing can be done; the people running these boards know what they are doing. As it stands right now, the BBS world is in two parts: pirating and the underground, which consists of hackers, phreaks, anarchists, carders (credit card fraud), and virus programmers. All have different boards and offer a variety of information on virtually any subject. From all of this reading you should have a fairly good idea of what computer crime is. I didn't mention it in the sections above, but the police and phone companies are making arrests and stopping a lot of these things every day. With the new technology today it is easier to catch these criminals than it was before. With the exception of the BBSs, the police have struck some major blows, busting a few BBSs and arresting hackers and phreaks, all of whom were highly looked up to for knowledge in their areas of specialty. If I had more time I could go into these arrests, but I must finish by saying that these are real crimes and the sentences are getting harsher; with a lot of the older people getting out, the newer people are getting arrested and being made examples of. This will deter a lot of would-be computer criminals.

f:\12000 essays\technology & computers (295)\Computer crime5.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

In the world of computers, computer fraud and computer crime are very prevalent issues facing every computer user. This ranges from system administrators to personal computer users who do work in the office or at home. 
Computers without any means of security are vulnerable to attacks from viruses, worms, and illegal computer hackers. If the proper steps are not taken, safe computing may become a thing of the past. Many security measures are being implemented to protect against illegalities. Companies are becoming more aware of and threatened by the fact that their computers are prone to attack. Virus scanners are becoming necessities on all machines. Installing and monitoring these virus scanners takes many man-hours and a lot of money for site licenses. Many server programs are coming equipped with a program called "netlog." This is a program that monitors the computer use of the employees in a company on the network. The program monitors memory and file usage. A qualified system administrator should be able to tell, by the amounts of memory being used and the file usage, if something is going on that should not be. If a virus is found, system administrators can pinpoint the user who put the virus into the network and investigate whether or not there was any malice intended.

One computer application that is becoming more widely used and, therefore, more widely abused, is electronic mail, or email. In the present day, illegal hackers can read email going through a server fairly easily. Email consists of not only personal transactions, but business and financial transactions as well. There are not many encryption procedures out for email yet. As Gates describes, soon email encryption will become a regular addition to email, just as a hard disk drive has become a regular addition to a computer (Gates p.97-98). Encrypting email can be done with a pair of keys derived from two large prime numbers. The public key will be listed on the Internet or in an email message. The second key is private, and only the user will have it. The sender will encrypt the message with the public key and send it to the recipient, who will then decipher it again with his or her private key. 
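The two-prime scheme Gates describes is, in essence, RSA. A toy sketch with deliberately tiny primes (these are standard textbook numbers, not anything from the essay's sources; real keys use primes dozens of digits long) shows the round trip from public-key encryption to private-key decryption:

```python
# Toy RSA -- illustration only; the primes are far too small to be secure.
p, q = 61, 53                  # the two secret primes
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (Python 3.8+): 2753

message = 65                   # a message encoded as a number smaller than n
cipher = pow(message, e, n)    # sender encrypts with the public key (n, e)
plain = pow(cipher, d, n)      # recipient decrypts with the private key d

print(cipher, plain)           # 2790 65 -- the message round-trips intact
```

Anyone can run the encryption step, since (n, e) is public; only the holder of d, which depends on knowing p and q, can reverse it.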
This method is not foolproof, but it is not easy to unlock either. The numbers being used will probably be over 60 digits in length (Gates p.98-99). The Internet also poses more problems to users. This problem faces the home user more than the business user. When a person logs onto the Internet, he or she may download a file corrupted with a virus. When he or she executes that program, the virus is released into the system. When a person uses the World Wide Web (WWW), he or she is downloading files into his or her Internet browser without even knowing it. Whenever a web page is visited, an image of that page is downloaded and stored in the cache of the browser. This image is used for faster retrieval of that specific web page. Instead of having to constantly download a page, the browser automatically reverts to the cache to open the image of that page. Most people do not know about this, but it is an example of how a virus can get into a machine without the user even knowing it. Every time a person accesses the Internet, he or she is not only accessing the host computer, but the many computers that connect the host and the user. When a person transmits credit card information, it goes over many computers before it reaches its destination. An illegal hacker can set up one of the connecting computers to copy the credit card information as it passes through the computer. This is how credit card fraud is committed with the help of the Internet. What companies such as Maxis and Sierra are doing is making secure sites. These sites have the capability to receive credit card information securely. This means the consumer can purchase goods by credit card over the Internet without worrying that the credit card number will be seen by unauthorized people. System administrators have three major weapons against computer crime. The first defense against computer crime is system security. This means the many layers systems have against attacks. 
When data comes into a system, it is scanned for viruses and safety. Whenever it passes one of these security layers, it is scanned again. The second resistance against viruses and corruption is computer law. This defines what is illegal in the computer world. In the early 1980's, prosecutors had problems trying suspects in computer crimes because there was no definition of illegal activity. The third defense is the teaching of computer ethics. This will hopefully deter people from becoming illegal hackers in the first place (Bitter p. 433). There are other ways companies can protect against computer fraud than in the computer and system itself. One way to curtail computer fraud is in the interview process and training procedures. If it is made clear to the new employee that honesty is valued in the company, the employee might think twice about committing a crime against the company. Background checks and fingerprinting are also good ways to protect against computer fraud. Computer crime prevention has become a major issue in the computer world. The lack of knowledge of these crimes and how they are committed is a factor as to why computer crime is so prevalent. What must be realized is that the "weakest link in any system is the human" (Hafner and Markoff p. 61). With the knowledge and application of the preventative methods discussed, computer crime may actually become an issue of the past.

Works Cited

Bitter, Gary G., ed. The Macmillan Encyclopedia of Computers. New York: Macmillan Publishing Company, 1992.

Gates, William. The Road Ahead. New York: Penguin Books, 1995.

Hafner, Katie, and John Markoff. Cyberpunk. New York: Simon and Schuster, 1991.

Romney, Marshall. "Computer Fraud - What Can Be Done About It?" CPA Journal Vol. 65 (May 1995): p. 30-33. 
f:\12000 essays\technology & computers (295)\Computer crime6.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Computer Crime: The Crime of the Future English II 6 April 1996

Computer Crimes

Explosive growth in the computer industry over the last decade has made new technologies cheaper and simpler for the average person to own. As a result, computers play an integral part in our daily lives. The areas in which computers affect life are infinite, ranging from entertainment to finances. If anything were to happen to these precious devices, the world would be chaotic. There is a type of person who thrives on chaos: the malevolent hacker. Some hackers act out of revenge or just impersonal mischievousness. But whatever their motives, their deeds can be destructive to a person's computer. An attack by a hacker affects not only the victim, but others as well. One case involving a notorious hacker named Kevin Mitnick did just that. Mitnick is a very intelligent man. He is 31 and awaiting trial for computer fraud. When he was a teenager, he used his knowledge of computers to break into the North American Defense Command computer. Had he not been stopped, he could have caused some real national defense problems for the United States (Sussman 66). Other "small time" hackers affect people just as much by stealing or giving away copyrighted software, which causes the prices of software to increase, thus increasing the price the public must pay for the programs. Companies reason that if they have a program that can be copied onto a disc, then they will lose a certain amount of their profit. People will copy it and give it to friends or pass it around on the Internet. To compensate, they will raise the price of disc programs. CD-ROM programs cost more to make but are about the same price as disc games. Companies don't lose money on them because it is difficult to copy a CD-ROM and impossible to transmit one over the Internet (Facts on File #28599 1). 
One company in particular, America On-line, has been hit hard by hackers. The feud started when a disgruntled ex-employee used his inside experience to help fellow hackers disrupt services offered by AOL (Alan 37). His advice became popular, and he spawned a program called AOHell. This program, in turn, created many copycats. They all portray their creators as gangsters, and one of the creators' names is "Da Chronic." Many also feature short clips of rap music (Cook 36). These programs make it easy for people with a little hacker knowledge to disrupt AOL. These activities include gaining access to free accounts, gaining access to other people's credit card numbers, and destroying chat rooms. The following is an excerpt from a letter from the creator of AOHell to a user:

What is AOHell? AOHell is an AOL for Windows add-on, which allows you to do many things. AOHell allows you to download for free, talk using other people's screen names, steal passwords and credit card information, and much more. AOHell is basically an anarchy program designed to help you, the user, and destroy AOL, the enemy: No matter what AOL says to you, nor what even Steve Case* himself may say about AOHell, don't be too quick to judge. America On-line may say anything to get you to stop using AOHell. They may say it's a virus, they may say it'll cancel your account, hell, they've even tried to suggest it may steal your password and send it to the author. None of this is true however. Free AOL does not interest me, as I have many ways to accomplish that. You should always keep that in mind when you hear such rumors. It's AOL and their sick pedophiles I'm against, not you, the user. You are the ones who are making it possible for me to achieve my goal, which is to make AOL a virtual Hell. Now stop reading, and go destroy a Mac room with the bot or something. :) (Cook 36)

The quote above was in defense of AOHell, which has received a lot of negative feedback. 
The loopholes for hackers and freeloaders may be closing, however. America On-line is reluctant to discuss specifics of its counterattack for fear of giving miscreants warning. However, many software trading rooms are being shut down almost as soon as they are formed. Others are often visited by 'narcs' posing as traders. New accounts started with phony credit cards are being cut off more promptly, and other card-verification schemes are in place. AOL has now developed the ability to resurrect a screen name that had been deleted by the hackers, and is rumored to have call-tracing technologies in the works (Alan 37).

Hacking is not just a problem in America. All across the world hackers plague anyone they can, and they're getting better at it. In Europe they're known as "Phreakers" (technologically sophisticated young computer hackers). These self-proclaimed Phreakers have made their presence felt all the way up the political ladder. They managed to steal the personal expense accounts of European Commission President Jacques. They revealed some embarrassing overspending (PC Weekly 12). Was this stealing justified? Was it done to protect the public from wasting their tax money? The European judicial system did not think so. The accused were sentenced to six months in prison (PC Weekly 12). This punishment might seem harsh, but not to Bill Clinton. He has appointed a task force to try to enforce laws on the Internet. The new laws would try to strengthen copyright laws by monitoring information being transferred, and if a violation occurred, a $5,000 fine would be imposed (Facts On File #28599 1). Clinton thinks this will protect businesses as well as consumers by keeping copyrighted material at a reasonable price. The only exception would be that libraries would have the right to copy "for purposes of preservation" (Phelps 75). Some people view hackers as the "Robin Hoods" of the Internet. 
They wrestle with heavyweight businesses to try to gain leverage for individuals. But in doing so they force businesses to raise prices to pay for security. It is an ongoing cycle. Many anti-hacking groups think they are gaining ground on hackers by writing more sophisticated software, but like a germ that develops resistance to a drug, the hackers find another way in. The loopholes available to the hacker are infinite. Just as one cannot leave one's shadow behind on a sunny day, the hacker will be around as long as there is something to hack.

Works Cited

Alan, Robert. "AOL's Piracy Woes: Attack and Counterattack." Macworld 16 June 1995: 37-38.
"Computers: On-line Copyright Protection Proposed." Facts on File World News Digest 14 September 1995: 28599.
Cook, William. "AOL's Battle with AOHell." Internet Underground 22 April 1995: 36-37.
"Data Busters." PC Weekly 8 August 1995: 12-14.
Phelps, Alan. Abstract. "On-line Slime." PC Novice 1995: 74-75. ProQuest, Disc II.
Sussman, Vic. "Hacker Nabbed." U.S. News & World Report 27 February 1995: 66-67.

f:\12000 essays\technology & computers (295)\Computer Crimes 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ABSTRACT Computer crimes seem to be an increasing problem in today's society. The main aspect of these offenses is information gained or lost. As our government tries to take control of the information that travels through the digital world and across networks such as the Internet, it also seems to be taking away certain rights and privileges that come with these technological advancements. These services open a whole new doorway to communications as we know it. They offer freedom of expression and, at the same time, privacy in the highest possible form. Can the government reduce computer crimes and still allow people the right to freedom of expression and privacy? 
INFORMATION CONTROL IN THE DIGITIZED WORLD In the past decade, computer technology has expanded at an incredibly fast rate, and the information stored on these computers has been increasing even faster. The amount of money, military intelligence, and personal information stored on computers has increased far beyond expectations. Governments, the military, and the economy could not operate without the use of computers. Banks transfer trillions of dollars every day over inter-linking networks, and more than one billion pieces of electronic mail pass through the world's networks daily. It is the age of the computer network, the largest of which is known as the Internet: a complex web of communications inter-linking millions of computers -- and this number is at least doubling every year. The computer was originally designed as a scientific and mathematical tool, to aid in performing intense and precise calculations. However, from the room-sized ENIAC (Electronic Numerical Integrator and Computer) of 1946 to the three-square-foot IBM PC of today, its uses have mutated and expanded far beyond this boundary. Ever-growing capacity and speed, which increase annually, and low cost, which decreases annually, have allowed computers to settle in at a more personal level, yet retain their position in mathematical and scientific research.1 They are now being used in almost every aspect of life as we know it today. The greatest effect of computers on life at the present time seems to be the Internet. What we know now as the Internet began in 1969 as a network then named ArpaNet. ArpaNet, run by the Pentagon's Defense Advanced Research Projects Agency, was first introduced as an answer to the government's question of how it would communicate during a war. It needed a network with no central authority, unlike the networks that had come before it. 
A main computer controlling the network would definitely be an immediate target for enemies. The first test node of ArpaNet was installed at UCLA in the fall of 1969. By December of the same year, three more nodes were added, and within two years there was a total of fifteen nodes in the system. By this time, however, something seemed to be changing in the information traveling across the nodes. By 1971, government employees began to obtain their own personal mail addresses, and the main traffic over the net shifted from scientific information to personal mail and gossip. Mailing lists were used to send mass quantities of mail to hundreds of people, and the first newsgroup was created for discussing views and opinions in the science fiction world. The network's decentralized structure made the addition of more machines, and the use of different types of machines, very simple. As computer technology advanced, interest in ArpaNet seemed only to expand. In 1977, a new method of transmission, called TCP/IP, was first put to use. The Transmission Control Protocol (TCP) would break messages into smaller packets of information at their source, then reassemble them at their destination, while the Internet Protocol (IP) would handle the addressing of these packets to ensure they reached the correct destinations. This newer method of transmission was much more efficient than the previous Network Control Protocol (NCP), and became very popular. Corporations such as IBM and DEC began to develop TCP/IP software for numerous platforms, and the demand for such software grew rapidly. This availability of software allowed more corporations and businesses to join the network very easily, and by 1985, ArpaNet was only a tiny portion of the newly created Internet. Other smaller networks are also very widely used today, such as FidoNet. 
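The split-and-reassemble scheme described above is easy to picture in code. Below is a toy sketch in Python of the idea: the chunk size and function names are invented for illustration, and real TCP does far more (acknowledgments, retransmission, checksums).

```python
# Toy illustration of TCP-style segmentation: split a message into
# numbered packets at the source, then reassemble them at the
# destination even if they arrive out of order.

def packetize(message, size=8):
    """Split `message` into (sequence_number, chunk) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Sort packets by sequence number and rejoin the chunks."""
    return "".join(chunk for _, chunk in sorted(packets))

message = "Packets may arrive in any order."
packets = packetize(message)
packets.reverse()                 # simulate out-of-order arrival
assert reassemble(packets) == message
```

Because each packet carries its own sequence number, the receiver can rebuild the original message no matter what path or order the packets took -- which is exactly what lets the network survive without a central authority.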
These networks serve the same purpose as the Internet, but on a much smaller scale, as they have less efficient means of transferring message packets. They are more localized, in the sense that information travels much more slowly when greater distances are involved. However, the ease of access to these networks and computers has allowed computer crime to grow to a much higher scale. These computers and networks store and transfer one thing -- information. The problem occurs when we want to determine the value of that information. Information lacks physical properties, and this intangible aspect of data creates problems when developing laws to protect it. The structure of our current legal system has, to this point, been based on ascertainable limits; physical properties have always been at its core.2 In the past, this information, or data, has been 'converted' into tangible form to accommodate our system. A prime example is the patent, which is written out on paper. Today, however, it is becoming much more difficult to 'convert' this data into a physical form, as the quantity is increasing so rapidly and is being stored in a virtual, digitized space.3 It is very important to realize and emphasize that computers and networks store and transfer only information, and that almost all of this information can be altered, in some way, undetectably. For example, when a file is stored in the popular DOS environment (and also in environments such as Windows, OS/2, and, in similar ways, UNIX), it is stored along with its date, time, size, and four attributes -- read-only, system, hidden, and archive. One might check the date at which the document was saved to determine whether it has been modified. However, the date is itself digital information, easily changed to whatever date or time the operator prefers. One may also consider the attributes stored with the file. 
If a file is flagged as 'read-only,' then perhaps it cannot be overwritten. This is true as far as it goes -- however, the attribute is easily turned off and on, as it too is information in a digitized sense, and therefore very easily changed. The same goes for a 'hidden' file: it may well be hidden from the novice user, but it is easily seen by anyone with even a slight knowledge of the system's commands. One may also consider moving this information to a floppy disk in order to preserve its originality; but then we are once again giving it a physical aspect, a task we earlier noted is close to impossible given the amount of information involved today. Digital information is infinitely mutable, and the information that protects this information is infinitely mutable.4 In order to understand how to control this information, we must first understand what information -- especially information of a digital nature -- and its value are. One cannot specifically define information as a whole. In today's society, 'knowledge is power' is a common phrase, and a quite true one. It would be even more true to say 'knowledge can be power.' It is how we use this knowledge that determines its power, and in the same sense, it is how we use and distribute this knowledge that determines its value. Information can be used in so many ways that it is virtually impossible to value it. What is valuable to one person may be completely worthless to another. The availability of knowledge also determines its worth: if information is as free as air, it has virtually no worth.5 Therefore, it is also a privacy issue. We can now base the value of information on three things: its availability, its use, and its user. In order to protect information under our current government, we must first value it; these three aspects can differ so widely that doing so is close to impossible. 
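The mutability argument above can be demonstrated directly. The short Python sketch below (the file name is invented for the example) uses only standard-library calls to back-date a file's "last modified" stamp and to switch its read-only flag off again, showing that the metadata guarding a file is itself just rewritable data.

```python
# Demonstrate that a file's "last modified" date and read-only flag
# are themselves just data, and can be rewritten at will.
import os
import stat
import time

path = "evidence.txt"                      # hypothetical file name
with open(path, "w") as f:
    f.write("original contents\n")

# Back-date the file to January 1, 1990.
past = time.mktime((1990, 1, 1, 0, 0, 0, 0, 0, -1))
os.utime(path, (past, past))
assert time.localtime(os.path.getmtime(path)).tm_year == 1990

# Mark the file read-only ... then simply turn the flag back off.
os.chmod(path, stat.S_IREAD)
os.chmod(path, stat.S_IREAD | stat.S_IWRITE)
with open(path, "w") as f:                 # overwriting succeeds again
    f.write("silently altered\n")

os.remove(path)                            # clean up the demo file
```

Nothing here requires special privileges or tools: two ordinary library calls rewrite the timestamp and the attribute, which is exactly why a file's recorded date proves nothing about when it was really changed.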
In addition to this, how do we determine who "owns" the information? Information itself is not a physical thing that only one person possesses at any time. If information is given away, it is still held by the giver as well as the taker. It is impossible to determine exactly who has it. If someone steals information, we cannot take it away from them -- it is intangible in almost every aspect. We must also understand the way in which our government, and most governments, create laws and attempt to deter illegal actions. As stated earlier, the American government, and many others, are based on a physical center, which I exemplified with the case of the US patent. When our government creates laws, the subjects of the laws are given a definable, ascertainable limit. When someone commits grand theft auto, breaking and entering, or murder, we understand what has occurred and have definite ways to prove what has occurred, where and when it occurred, how it occurred, and, if applicable, what has been harmed and what its value is. However, when we look at computer crimes, such as unauthorized access, we cannot be as clear on these aspects; we do not have definite ways to prove the crime, or who committed it, nor do we have a way to define the value of anything damaged, if anything was damaged at all. It is hard to convict a person when all they did was slow down a computer network for a few days, or look at a credit profile on John Doe. Problems also occur because people, including those in the legal profession as well as jurors, do not always understand technology. They do not always understand how mutable digital information can be, and how easily it can be accessed and distributed. When a jury does not understand, one cannot truly be declared guilty "beyond a reasonable doubt." "Technically, I didn't commit a crime. All I did was destroy data. I didn't steal anything."6 How can this be argued? 
Crimes committed in the computer world do not exactly adhere to current laws that address physical crimes. We cannot simply adapt current laws to information crimes; trying to do so will cause too many problems and confusions because of the variety, extent, and value of information as a whole. Yet this is exactly what the government is trying to do. It must also be considered that this is not strictly a US problem, nor is it centered on the US. Although started by the United States government, the Internet has grown worldwide, reaching over seventy countries. Since the Internet has such a decentralized structure, one cannot say that the US is "in charge" of the network. The problem is, the US government does not see this itself. The United States government wants to censor the information traveling across the Internet and other telecommunication services, but this can no longer be the case. We cannot expect other countries to adhere to the laws of the United States, just as most Americans would not expect to have to obey laws set by other countries. Therefore, it could easily be said that the government would be invading privacy if it were to attempt to censor the information that travels these networks. Individual computers are, of course, an individual's property, and it would, without a doubt, be an invasion of privacy if the government wanted to, at any given time, search your hard drive without just cause. I feel that the government wants too much power this time. It would seem that it wants to access and control all digital information in America for its own benefit. The US government created an encryption device called the Clipper chip, which was to ensure digital privacy among its users. However, our government seems to define privacy only to an extent: it had also planned to keep, in its possession, a copy of each chip's encryption key. So much for total privacy. 
The government seems to be on a quest for total control over its citizens, and the citizens of the world. This may seem extreme at the present time, but our current legal system does not allow for the undefinable limits that information control presents, especially on a worldwide basis. If the government tries to gain too much control, it could very well lead to its own failure. Control -- the control we need -- is not a legal problem at all. It is a social, moral, and technological problem.7 What is needed is a kind of 'information ethics.' A set of morals and customs must be slowly adapted, not pounded into the digital world by the government. Virtual laws must be formed by a virtual government. Information cannot be controlled by our government in its current form; in order to control information, the government would have to induce a drastic change. The First Amendment, in reality, is the foundation of the rights of the citizens of this country. This amendment, in its most basic form, guarantees our right to inform and be informed. The government cannot and will not be able to control digital information as a whole, or govern the right to this information, without sacrificing the keystone of our nation and of our rights as Americans.

1 We see about 50-70% more computing power per year, and hardware prices drop about 25-50% per year. Since 1978, raw computing power has increased by over 500 times. "80x86 Evolution," Byte, June 1994, p. 19.
2 Curtis E.A. Karnow, Recombinant Culture: Crime in the Digital Network (speech, Defcon II, Las Vegas), 1994.
3 S. Zuboff, In the Age of the Smart Machine, New York, 1992; Michael Gemignani, Viruses and Criminal Law, reprinted in Lance Hoffman, Rogue Programs: Viruses, Worms and Trojan Horses, New York, 1990.
4 Lauren Wiener, Digital Woes, 1993.
5 John Perry Barlow, "The Economy of Ideas," Wired, March 1994.
6 Martin Sprouse, Sabotage in the American Workplace: Anecdotes of Dissatisfaction, Mischief, and Revenge, New York, 1992 (a Bank of America employee who planted a logic bomb in the company computer system).
7 Curtis E.A. Karnow, Recombinant Culture: Crime in the Digital Network (speech, Defcon II, Las Vegas), 1994.

f:\12000 essays\technology & computers (295)\Computer Crimes Speech.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer crimes are on the rise: 1 in 10 Americans experience some form of malicious attack on their computer system. If you pay attention to the rest of this speech, you will understand how a hacker's mind works and how to defend yourself. In this speech I will tell you why and how people break into computers, what sorts of trouble they cause, and what kind of punishment lies ahead for them if caught. 
Hackers and crackers break into computer systems for any of a wide variety of reasons. Some groups break into computers for financial gain, while others do it simply as a means to pass time at work or at school. For most, it is a thrill to figure out how to break into a computer, and most never have any intention of causing harm; I believe that for the vast majority it is merely the thrill of the "hunt" that pushes them to such great lengths. Many employees of large corporations feel that they are not paid as much as they should be. If they have high security clearance, some capitalize on it by selling the data they have access to on the black market -- whether it is Ford Motor Company's plans for the 1999 F-150 or spec sheets for the military's new bomber, it happens every day. To my left is a drawing that illustrates the method most hackers use to take over your computer. Ever since the dial-up connection was invented, anyone with a modem has had the ability to wreck any one of thousands of computers. One of the most talked-about forms of computer crime is the computer virus, a small but highly destructive program written by an unscrupulous hacker. Back in 1984, a 17-year-old hacker single-handedly brought down four hundred thousand computers in a matter of hours. To my left is a graph depicting the number of computer crimes committed from 1988 until now. Some hackers create a program called a worm, a piece of malicious software in the virus family; worms have been written to transfer money from bank accounts into their authors' own personal checking accounts. Another way hackers cause trouble is by altering the telephone switching networks at MCI, AT&T, and Sprint. By doing this they are able to listen to any conversation they choose. Oftentimes they will listen in on the police and FBI communicating with each other, which allows them to move to a new location before they are found. 
Some hackers use their knowledge of the telephone system to turn an enemy's home telephone into a virtual pay-phone that asks for quarters whenever the phone is taken off the hook. A person who commits a computer crime and is caught will very likely face a substantial punishment, but often these criminals are never caught unless they really screw up. The most-wanted hacker, Kevin Mitnick, was tracked down and arrested after he broke into a computer that belonged to a Japanese security professional. After this man noticed that someone had gotten into his computer, he dedicated himself to tracking down this one man. Kevin was able to stay one step ahead of the police for some time, but the fatal mistake he made was leaving a voice-mail message on a computer bragging that he thought he was unstoppable. When he was arrested he faced a $250,000 fine, 900 hours of community service, and a 10-year jail sentence. Many schools and small businesses still don't have a clue about how to deal with computer crimes whenever they happen to strike. In conclusion, hopefully you now know a little more about computer crimes and the people who commit them. Although most computer crimes are never accounted for, the ones that are, are almost always prosecuted to the fullest extent of the law.
f:\12000 essays\technology & computers (295)\computer crimes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THESIS: Laws must be passed to address the increase in the number and types of computer crimes. Over the last twenty years, a technological revolution has occurred, and computers are now an essential element of today's society. Large computers are used to track reservations for the airline industry, process billions of dollars for banks, manufacture products for industry, and conduct major transactions for businesses, and more and more people now have computers at home and at the office. 
People commit computer crimes because of society's declining ethical standards more than any economic need. According to experts, gender is the only bias: the profile of today's non-professional thieves crosses all races, age groups, and economic strata. Computer criminals tend to be relatively honest and in a position of trust: few would do anything to harm another human, and most do not consider their crime to be truly dishonest. Most are males; women have tended to be accomplices, though of late they are becoming more aggressive. Computer criminals tend to be "between the ages of 14-30, they are usually bright, eager, highly motivated, adventuresome, and willing to accept technical challenges." (Shannon, 16:2) "It is tempting to liken computer criminals to other criminals, ascribing characteristics somehow different from 'normal' individuals, but that is not the case." (Sharp, 18:3) It is believed that the computer criminal "often marches to the same drum as the potential victim but follows [an] unanticipated path." (Blumenthal, 1:2) There is no single profile of a computer criminal, because they range from young teens to elders, from black to white, from short to tall. Definitions of computer crime have changed over the years as the users and misusers of computers have expanded into new areas. "When computers were first introduced into businesses, computer crime was defined simply as a form of white-collar crime committed inside a computer system." (2600: Summer 92, p.13) Some new terms have been added to the computer criminal vocabulary. "Trojan Horse is a hidden code put into a computer program. Logic bombs are implanted so that the perpetrator doesn't have to physically present himself or herself." (Phrack 12, p.43) Another form of hidden code is the "salami," named after the big salami loaves sold in delis years ago. 
Shoppers would shave thin slices off the loaves and secretly return them to the shelves, hoping no one would notice anything missing; in the same way, a salami program skims amounts too small to be noticed from many accounts. (Phrack 12, p.44) Congress has been reacting to the outbreak of computer crimes. "The U.S. House of Judiciary Committee approved a bipartisan computer crime bill that was expanded to make it a federal crime to hack into credit and other data bases protected by federal privacy statutes." (Markoff, B 13:1) The bill creates several categories of federal misdemeanors and felonies for unauthorized access to computers to obtain money, goods, services, or classified information. It also applies to computers used by the federal government or used in interstate or foreign commerce, which would cover any system accessed by interstate telecommunication systems. "Computer crime often requires more sophistications than people realize it." (Sullivan, 40:4) Many U.S. businesses have ended up in bankruptcy court unaware that they had been victimized by disgruntled employees. American businesses wish that the computer security nightmare would vanish like a fairy tale. Information processing has grown into a gigantic industry: "It accounted for $33 billion in services in 1983, and in 1988 it was accounted to be $88 billion." (Blumenthal, B 1:2) All this information is vulnerable to greedy employees, nosy teenagers, and general carelessness, yet no one knows whether the sea of computer crime is "only as big as the Gulf of Mexico or as huge as the North Atlantic." (Blumenthal, B 1:2) Vulnerability is likely to increase in the future; by the turn of the century, "nearly all of the software to run computers will be bought from vendors rather than developed in houses, standardized software will make theft easier." (Carley, A 1:1) A two-year Secret Service investigation, code-named Operation Sun-Devil, targeted companies all over the United States and led to numerous seizures. 
Critics of Operation Sun-Devil claim that the Secret Service and the FBI, which ran a similar operation, conducted unreasonable searches and seizures, disrupted the lives and livelihoods of many people, and generally conducted themselves in an unconstitutional manner. "My whole life changed because of that operation. They charged me and I had to take them to court. I have to thank 2600 and Emmanuel Goldstein for publishing my story. I owe a lot to fellow hackers and the Electronic Frontier Foundation for coming up with the [brunt] of the legal fees so we could fight for our rights." (Interview with Steve Jackson, who was charged in Operation Sun-Devil) The case of Steve Jackson Games vs. the Secret Service has yet to come to a verdict, but should very soon. The Secret Service seized all of the computer materials with which Steve Jackson made his living, charging that he made games that published information on how to commit computer crimes and that he was running an underground hacking system. "I told them it was only a game and that I was angry, and that was the way that I tell a story. I never thought Hacker [Steve Jackson's game] would cause such a problem. My biggest problem was that they seized the BBS (Bulletin Board System), and because of that I had to make drastic cuts, so we laid [off] eight people out of 18. If the Secret Service had just come with a subpoena we could have showed or copied every file in the building for them." (Steve Jackson interview) Computer professionals are grappling not only with issues of free speech and civil liberties, but also with how to educate the public and the media about the difference between on-line computer experimenters and actual criminals. They also point out that, while computer networks make possible a new kind of crime, they are protected by the same laws and freedoms as any real-world domain. 
"A 14-year old boy connects his home computer to a television line, and taps into the computer at his neighborhood bank and regularly transfers money into his [personal] account." (2600: Spring 93, p.19) On paper and on screens, a popular new mythology is growing quickly in which computer criminals are the 'Butch Cassidys' of the electronic age. "These true tales of computer capers are far from being futuristic fantasies." (2600: Spring 93, p.19) They are inspired by scores of real-life cases. Computer crimes are not just crimes against the computer; they also encompass the theft of money, information, software, benefits, welfare, and much more. "With the average damage from a computer crime amounting to about $.5 million, sophisticated computer crimes can rock the industry." (Phrack 25, p.6) Computer crimes can take many forms. Swindling or stealing money is one of the most common: an example is Wells Fargo Bank, which discovered an employee using the bank's computers to embezzle $21.3 million, the largest U.S. electronic bank fraud on record. (Phrack 23, p.46) Credit card scams are another type of computer crime, and one that frightens many people, for good reason. A computer hacker who goes by the handle of Raven uses his computer to access credit databases. In a talk I had with him, he tried to explain what he did and how he did it. He is a very intelligent person: he gained illegal access to a credit database and obtained the credit histories of local residents. He then allegedly used the residents' names and credit information to apply for 24 Mastercard and Visa cards, and used the cards to issue himself at least $40,000 in cash from a number of automatic teller machines. He was caught once, but because he was only withdrawing $200 it was a minor larceny, and since they couldn't prove he was the one who committed the other thefts, he was put on probation. 
"I was 17 and I needed money and the people in the underground taught me many things. I would not go back and not do what I did, but I would try not to get caught next time. I am the leader of HTH (High Tech Hoods) and we are currently devising other ways to make money. If it weren't for my computer my life would be nothing like it is today." (Interview w/Raven) "Finally, one of the thefts involving the computer is the theft of computer time. Most of us don't realize this as a crime, but the congress consider this as a crime." (Ball, V85) Every day people are urged to use the computer, but sometimes the use becomes excessive, improper, or both. For example, at most colleges computer time is thought of as a free good: students and faculty often computerize mailing lists for their churches or fraternal organizations, which might be written off as good public relations. But use of the computers for private consulting projects without payment to the university is clearly improper. In business it is similar. Management often looks the other way when employees play computer games or generate a Snoopy calendar, but if this becomes excessive the employee is stealing work time, and computers can process only so many tasks at once. Although considered less severe than other computer crimes, such activities can represent a major business loss. "While most attention is currently being given to the criminal aspects of computer abuses, it is likely that civil action will have an equally important effect on long term security problems." (Alexander, V119) The issue of computer crime draws attention to the civil and liability aspects of computing environments; in the future there may be more individual and class action suits. CONCLUSION Computer crimes are growing fast because the evolution of technology is fast, but the evolution of law is slow. 
While a variety of states have passed legislation relating to computer crime, the situation is a national problem that requires a national solution. Controls can be instituted within industries to prevent such crimes. Protection measures such as hardware identification, access-control software, and disconnecting critical bank applications should be devised. However, computers don't commit crimes; people do. The perpetrator's best advantage is ignorance on the part of those protecting the system. Proper internal controls reduce the opportunity for fraud.

BIBLIOGRAPHY

Alexander, Charles. "Crackdown on Computer Capers." Time, Feb. 8, 1982, V119.
Ball, Leslie D. "Computer Crime." Technology Review, April 1982, V85.
Blumenthal, R. "Going Undercover in the Computer Underworld." New York Times, Jan. 26, 1993, B, 1:2.
Carley, W. "As Computers Flip, People Lose Grip in Saga of Sabotage at Printing Firm." Wall Street Journal, Aug. 27, 1992, A, 1:1.
Carley, W. "In-House Hackers: Rigging Computers for Fraud or Malice Is Often an Inside Job." Wall Street Journal, Aug. 27, 1992, A, 7:5.
Finn, Nancy, and Peter Finn. "Don't Rely on the Law to Stop Computer Crime." Computer World, Dec. 19, 1984, V18.
Markoff, J. "Hackers Indicted on Spy Charges." New York Times, Dec. 8, 1992, B, 13:1.
Phrack Magazine, issues 1-46. Compiled by Knight Lightning and Phiber Optik.
Shannon, L.R. "The Happy Hacker." New York Times, Mar. 21, 1993, 7, 16:2.
Sharp, B. "The Hacker Crackdown." New York Times, Dec. 20, 1992, 7, 18:3.
Sullivan, D. "U.S. Charges Young Hackers." New York Times, Nov. 15, 1992, 1, 40:4.
2600: The Hacker Quarterly, issues Summer 92-Spring 93. 
Compiled by Emmanuel Goldstein.
f:\12000 essays\technology & computers (295)\Computer Criminals.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computers are used to track reservations for the airline industry, process billions of dollars for banks, manufacture products for industry, and conduct major transactions for businesses, and more and more people now have computers at home and at the office. People commit computer crimes because of society's declining ethical standards more than any economic need. According to experts, gender is the only bias: the profile of today's non-professional thieves crosses all races, age groups, and economic strata. Computer criminals tend to be relatively honest and in a position of trust: few would do anything to harm another human, and most do not consider their crime to be truly dishonest. Most are males; women have tended to be accomplices, though of late they are becoming more aggressive. Computer criminals tend to be "between the ages of 14-30, they are usually bright, eager, highly motivated, adventuresome, and willing to accept technical challenges."(Shannon, 16:2) "It is tempting to liken computer criminals to other criminals, ascribing characteristics somehow different from 'normal' individuals, but that is not the case."(Sharp, 18:3) It is believed that the computer criminal "often marches to the same drum as the potential victim but follows an unanticipated path."(Blumenthal, 1:2) There is no single profile of a computer criminal, because they range from young teens to elders, from black to white, from short to tall. Definitions of computer crime have changed over the years as the users and misusers of computers have expanded into new areas. "When computers were first introduced into businesses, computer crime was defined simply as a form of white-collar crime committed inside a computer system."(2600: Summer 92, p.13) Some new terms have been added to the vocabulary of computer crime.
"Trojan Horse is a hidden code put into a computer program. Logic bombs are implanted so that the perpetrator doesn't have to physically present himself or herself." (Phrack 12,p.43) Another form of a hidden code is "salamis." It came from the big salami loaves sold in delis years ago. Often people would take small portions of bites that were taken out of them and then they were secretly returned to the shelves in the hopes that no one would notice them missing.(Phrack 12,p.44) Congress has been reacting to the outbreak of computer crimes. "The U.S. House of Judiciary Committee approved a bipartisan computer crime bill that was expanded to make it a federal crime to hack into credit and other data bases protected by federal privacy statutes."(Markoff, B 13:1) This bill is generally creating several categories of federal misdemeanor felonies for unauthorized access to computers to obtain money, goods or services or classified information. This also applies to computers used by the federal government or used in interstate of foreign commerce which would cover any system accessed by interstate telecommunication systems. "Computer crime often requires more sophistications than people realize it."(Sullivan, 40:4) Many U.S. businesses have ended up in bankruptcy court unaware that they have been victimized by disgruntled employees. American businesses wishes that the computer security nightmare would vanish like a fairy tale. Information processing has grown into a gigantic industry. "It accounted for $33 billion in services in 1983, and in 1988 it was accounted to be $88 billion." (Blumenthal, B 1:2) All this information is vulnerable to greedy employees, nosy-teenagers and general carelessness, yet no one knows whether the sea of computer crimes is "only as big as the Gulf of Mexico or as huge as the North Atlantic." (Blumenthal,B 1:2) Vulnerability is likely to increase in the future. 
And by the turn of the century, "nearly all of the software to run computers will be bought from vendors rather than developed in-house; standardized software will make theft easier."(Carley, A 1:1) A two-year Secret Service investigation code-named Operation Sun Devil targeted companies all over the United States and led to numerous seizures. Critics of Operation Sun Devil claim that the Secret Service and the FBI, which ran a similar operation, conducted unreasonable searches and seizures, disrupted the lives and livelihoods of many people, and generally conducted themselves in an unconstitutional manner. "My whole life changed because of that operation. They charged me and I had to take them to court. I have to thank 2600 and Emmanuel Goldstein for publishing my story. I owe a lot to fellow hackers and the Electronic Frontier Foundation for coming up with the brunt of the legal fees so we could fight for our rights."(Interview with Steve Jackson, who was charged in Operation Sun Devil) The case of Steve Jackson Games v. the Secret Service has yet to come to a verdict, but should very soon. The Secret Service seized all of Steve Jackson's computer materials, on which he made his living. They charged that he made games that published information on how to commit computer crimes, and that he was running an underground hacking system. "I told them it was only a game and that I was angry and that was the way that I tell a story. I never thought Hacker [Steve Jackson's game] would cause such a problem. My biggest problem was that they seized the BBS (Bulletin Board System) and because of that I had to make drastic cuts, so we laid off eight people out of 18.
If the Secret Service had just come with a subpoena we could have shown or copied every file in the building for them."(Steve Jackson interview) Computer professionals are grappling not only with issues of free speech and civil liberties, but also with how to educate the public and the media about the difference between on-line computer experimenters and real criminals. They also point out that, while crimes committed over computer networks are a new kind of crime, the networks are protected by the same laws and freedoms as any real-world domain. "A 14-year old boy connects his home computer to a television line, and taps into the computer at his neighborhood bank and regularly transfers money into his personal account."(2600: Spring 93, p.19) On paper and on screens a popular new mythology is growing quickly in which computer criminals are the 'Butch Cassidys' of the electronic age. "These true tales of computer capers are far from being futuristic fantasies."(2600: Spring 93, p.19) They are inspired by scores of real-life cases. Computer crime is not just crime against the computer; it also covers the theft of money, information, software, benefits such as welfare, and much more. "With the average damage from a computer crime amounting to about $.5 million, sophisticated computer crimes can rock the industry."(Phrack 25, p.6) Computer crimes can take on many forms. Swindling or stealing money is one of the most common computer crimes. An example of this kind of crime is Wells Fargo Bank, which discovered an employee using the bank's computer to embezzle $21.3 million; it is the largest U.S. electronic bank fraud on record.(Phrack 23, p.46) Credit card scams are also a type of computer crime. This is one that frightens many people, and for good reason. A computer hacker who goes by the handle of Raven uses his computer to access credit databases. In a talk that I had with him he tried to explain what he did and how he did it.
He is a very intelligent person: he gained illegal access to a credit database and obtained the credit histories of local residents. He then allegedly used the residents' names and credit information to apply for 24 MasterCard and Visa cards. He used the cards to issue himself at least $40,000 in cash from a number of automatic teller machines. He was caught once, but he was only withdrawing $200 at the time, which made it a minor larceny, and they couldn't prove that he was the one behind the other withdrawals, so he was put on probation. "I was 17 and I needed money and the people in the underground taught me many things. I would not go back and not do what I did but I would try not to get caught next time. I am the leader of HTH (High Tech Hoods) and we are currently devising other ways to make money. If it weren't for my computer my life would be nothing like it is today."(Interview w/Raven) "Finally, one of the thefts involving the computer is the theft of computer time. Most of us don't realize this as a crime, but the congress consider this as a crime."(Ball, V85) Every day, people are urged to use the computer, but sometimes the use becomes excessive, improper, or both. For example, at most colleges computer time is thought of as a free good; students and faculty often computerize mailing lists for their churches or fraternal organizations, which might be written off as good public relations. But use of university computers for private consulting projects without payment to the university is clearly improper. In business it is similar. Management often looks the other way when employees play computer games or generate a Snoopy calendar. But if this becomes excessive, the employee is stealing work time, and computers can process only so many tasks at once. Although considered less severe than other computer crimes, such activities can represent a major business loss.
"While most attention is currently being given to the criminal aspects of computer abuses, it is likely that civil action will have an equally important effect on long term security problems."(Alexander, V119) The issue of computer crimes draw attention to the civil or liability aspects in computing environments. In the future there may tend to be more individual and class action suits. CONCLUSION Computer crimes are fast and growing because the evolution of technology is fast, but the evolution of law is slow. While a variety of states have passed legislation relating to computer crime, the situation is a national problem that requires a national solution. Controls can be instituted within industries to prevent such crimes. Protection measures such as hardware identification, access controls software and disconnecting critical bank applications should be devised. However, computers don't commit crimes; people do. The perpetrator's best advantage is ignorance on the part of those protecting the system. Proper internal controls reduce the opportunity for fraud. BIBLIOGRAPHY Alexander, Charles, "Crackdown on Computer Capers," Time, Feb. 8, 1982, V119. Ball, Leslie D., "Computer Crime," Technology Review, April 1982, V85. Blumenthal,R. "Going Undercover in the Computer Underworld". New York Times, Jan. 26, 1993, B, 1:2. Carley, W. "As Computers Flip, People Lose Grip in Saga of Sabatoge at Printing Firm". Wall Street Journal, Aug. 27, 1992, A, 1:1. Carley, W. "In-House Hackers: Rigging Computers for Fraud or Malice Is Often an Inside Job". Wall Street Journal, Aug 27, 1992, A, 7:5. Markoff, J. "Hackers Indicted on Spy Charges". New York Times, Dec. 8, 1992, B, 13:1. Finn, Nancy and Peter, "Don't Rely on the Law to Stop Computer Crime," Computer World, Dec. 19, 1984, V18. Phrack Magazine issues 1-46. Compiled by Knight Lightning and Phiber Optik. Shannon, L R. "THe Happy Hacker". New York Times, Mar. 21, 1993, 7, 16:2. Sharp, B. "The Hacker Crackdown". 
New York Times, Dec. 20, 1992, 7, 18:3. Sullivan, D., "U.S. Charges Young Hackers," New York Times, Nov. 15, 1992, 1, 40:4. 2600: The Hacker Quarterly, issues Summer 92-Spring 93, compiled by Emmanuel Goldstein.
f:\12000 essays\technology & computers (295)\Computer Ergonomics in the Workplace.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Businesses strive for high production at low cost, which would yield the highest profit for a company. To many businesses, this is only a mirage, because the "low cost" for the business usually results in a "high cost" for the employees: lower-quality workplace equipment, lower salaries, fewer benefits, and so on. These costs create an unhappy workplace environment. Companies understand that the more efficient their workers are, the more productive their business will become. Although this takes a great deal of money at first, the investment pays off. Many different things in the workplace add to stress and injuries, ranging from lifting heavy boxes to typing too much at the keyboard. This paper will focus on the principles of ergonomics in the computer workstation. According to the Board of Certification for Professional Ergonomists (BCPE), ergonomics "is a body of knowledge about human abilities, human limitations and human characteristics that are relevant to design. Ergonomic design is the application of this body of knowledge to the design of tools, machines, systems, tasks, jobs, and environments for safe, comfortable and effective human use."(BCPE, 1993) In the average computer workstation, employees are exposed to over a dozen hazards. Two factors can prevent this: forming good work habits and ergonomically designed computer workstations. We will discuss both throughout the paper. First, a few terms may need defining.
Repetitive strain injury (RSI) results from the repeated physical movement of certain body parts, which damages tendons, nerves, muscles, and other soft body tissues. If these injuries are not taken care of immediately, permanent damage can be done. A few common results of untreated RSI are injuries such as carpal tunnel syndrome, tendinitis, tenosynovitis, DeQuervain's syndrome, and thoracic outlet syndrome. All of these can be prevented by good working habits and ergonomic engineering.i Ergonomically designing a computer workstation usually costs about $1000. This expense could be eliminated by the formation of good work habits, which are essential for the safety of computer terminal employees. A number of precautions can be taken with a computer workstation; we shall discuss six of them. First, the whole body must be relaxed. The correct posture is shown in Figure 1. Notice that the arms and thighs are parallel to the floor and the feet are flat on the floor. Also notice that the wrists are not bent in any way; the wrists are among the most frequently damaged parts of the body where RSI is concerned. Figure 1 The wrists should not be rested on anything when typing, as that would cause someone to stretch their fingers to hit keys. They should also be straight: not bent up, down, or to the side. The correct position is portrayed in Figure 2, the incorrect one in Figure 3. Studies show that these steps are easier to perform when the keyboard is not tilted toward the user; when it is tilted, it is natural to rest your wrists on the table. A flat keyboard also sits at a lower level, creating a more natural position. Another practice to consider is how hard you press on the keys. The user is not supposed to hit the keys, as this may cause damage to the tendons and nerves in the fingers.
Instead, use a soft touch; not only will your fingers thank you for it, the keyboard will too! Keeping in mind not to stretch your fingers when typing, use two hands to perform double-key operations. For example, to capitalize the first letter of a sentence, you hold down the Shift key and press the letter. Figure 2 Figure 3 This is a double-key operation. Instead of stretching two fingers on one hand to do it, use both hands. No matter what pace you are working at, take breaks every ten minutes or so in addition to your hourly breaks. These breaks need only be a few moments at a time. If breaks are not taken at this pace, you may be subjecting yourself to injuries in the back, neck, wrists, and fingers. Also, when using the mouse, do not grip it tightly. Most mice used in offices today are not designed with human factors in mind. Some mice, like the Microsoft mouse, are designed to fit the contour of your hand. Although this may seem nice, it does not mean that one will be able to use it for hours on end without feeling any discomfort in the hand. Other mice, which will be mentioned later, are designed for comfortable use for extended periods of time. Try to keep your arms and hands warm; cold muscles are more prone to strain and injury than warm ones. Wearing a sweater or a long-sleeved shirt can be of great importance, especially when working in air-conditioned offices. And finally, do not use the computer more than necessary. Your body can handle only so much strain on the neck, shoulders, wrists, and fingers. Even with the greatest state-of-the-art ergonomically designed computer workstation, people put themselves at risk. Some people tend to spend their break times at work playing video games. This is a good way to ease the mind of everyday pressure (to some extent), but it is also a good example of using the computer "more than necessary."
If a person needs to use a computer for video games, take a break every ten minutes or so, as mentioned above.ii All of the strategies mentioned above can reduce injuries when using a computer for an extended period of time, and they do not involve any ergonomically designed hardware. If employees form these habits, there is less need to purchase ergonomic equipment for the office. But making new habits is not the easiest thing to do for most people. Next, we will take a look at how a computer workstation should be set up. The following data comes from an on-line quiz from the University of Virginia. The first question about computer workstations concerns the seat being too high. This strains the legs of the operator, causing them to "go to sleep"; basically, the blood flow to the legs and feet is cut off. The next fact presented is that the top of the video display terminal (VDT) should be no higher than eye level. This is one of the most controversial topics because it deals with the neck and shoulders; some people state that the screen should be below, not at, eye level because our natural tendency is to look down. Thirdly, the best viewing distance from the VDT is about 24 inches from the screen. This deals with eye strain. Some people worry about radiation that may be emitted from the VDT, but radiation is not a big problem with newer monitors. Even old ones have a protective coating around the screen that allows very few particles through, and what little gets past the screen dissipates within inches. Eye strain is the important factor here; if eye strain continues to be a problem, look away periodically at an object far from you. The next question deals with the tilt of the screen. Since the monitor should be at or below eye level, it is easier to read with a 10 to 20 degree back tilt.
Many VDTs have a tilt adjustment on the bottom; if not, a book can be propped under the monitor to tilt it back a bit. Another question concerns the height of the keyboard from the floor: it should be at elbow height. As mentioned before, the forearms and thighs should be parallel to the floor, and this is possible only if the keyboard is at elbow height. How should the lighting be in offices where computers are used? It should be a bit dimmer than normal office lighting, because brighter office lighting puts a lot of glare on the screen; this, too, has to do with eye strain. Noise in the work area causes fatigue, and it also causes the computer operator to lose concentration on their work. Not only does noise affect concentration and cause fatigue, it can obviously damage one's hearing as well. Using this questionnaire, I conducted a survey among students at Canisius College in Buffalo, NY. The purpose of the survey was to test the student body's knowledge of VDTs and their safety precautions. In order to accomplish this in a professional manner, a random sample of students was sought. To obtain a truly random sample, certain criteria must be met, too numerous to mention in this essay; needless to say, not all of the criteria were met, so the sample was not fully random. The sample size of the survey was approximately 100 students. The results were not surprising. There was one problem with the questionnaire: many students did not know what VDT meant1. According to the survey, 100% of the respondents were familiar with what ergonomics is, knew how to reduce tension, knew what movement in your peripheral vision does, and knew what to do if you wear bifocal lenses. This question posed a problem because of the way the answer was worded: the correct answer is very specific and stands out from the other possible answers.
The rest of the questions were well worded and not too obvious. Besides the first and last questions, a few others were answered correctly by everyone: questions eleven and twelve. The probable cause is that these questions were easy; their answers were more obvious than the others. If you compare these questions to the more difficult ones (seven and thirteen), the percentages correct differ. Questions seven and thirteen deal with very specific measurements that are all closely related; they are not "common knowledge" questions. I am assuming that people were taking educated guesses when encountering them, which could be the reason for the large error rate in these parts of the survey. Now that we have covered the good habits to form when working at computer workstations and taken a look at what a selected college student population knew about VDTs, we will look at ergonomic engineering and the reasons for its emergence. There are a number of devices, ranging from keyboards and mice to chairs and even foot stands. In this paper we will review just a few of these ergonomically designed items and why ergonomics matters to computer users. First, we will discuss the purpose of ergonomically designed items. There are a number of reasons for the emergence of ergonomics. One reason is insurance. Many companies carry disability and other types of insurance to cover injuries that occur while working; this would not be needed as much if there were ergonomically designed computer workstations, saving the company insurance hassle and money in the long run. Another reason is that the injuries due to the overuse of computers are long lasting. These ailments do not just go away in time, and one cannot put a price on injuries like this. This is why ergonomics is so important.
Secondly, we will look at the item that affects the common computer user the most: the keyboard. With computers getting faster and faster every day, it is about time that people looked at the hazards they pose instead of just perfecting them. Keyboards pose the largest threat to the computer user, not only because the keyboard is the most used input device, but also because of its design: a flat, straight input device that can cause strain and injury if not used properly. Ergonomic engineers realized this hazard and designed a number of different alternatives. All of the ergonomically designed keyboards attempt to reduce injuries by studying the natural position of the fingers, hands, and wrists; that knowledge then drives the design of keyboards and mice. There is no ideal position for the hand as of yet; hence, there exist different types of keyboards and mice. Figures 4 - 5 show different styles of keyboards and mice. Figure 4 - http://www.earthlink.net/~dbialick/kinesis Figure 5 Notice the unique structure of the keyboard; it does not even look like one. This may take time to get used to, but it will pay off in the end. Not only is there hardware for the reduction of RSI, there is software as well. Micronite softwareiii designed a program called ARMS (Against Repetitive Strain Injury), which reminds you when it is time to take a break. It also walks you through a series of videos that portray ways to massage different parts of your hand, neck, and shoulders. With all of this hardware and software available for business and personal use, who would not be interested? Well, many people think that it will not happen to them, until it does. People should not wait that long. If you use a computer for more than four hours a day, you are prone to RSI. If your company does not have ergonomically engineered hardware, software, or furniture, then do something about it. It's your health. 1 A copy of the survey is attached to the end of this paper.
The correct answer is bolded. i URL address: http://webreference.com/rsi.html#whatis ii URL address: http://www.engr.unl.edu/ee/eeshop/rsi.html iii URL address: http://www.micronite.com/
Glossary
CGI "Common Gateway Interface". A standard protocol which allows HTML-based forms to send field contents to a program on the Internet for processing. It also allows the receiving program to respond by sending an HTML response document. Email "Electronic Mail". An electronic document similar to a piece of mail in that it is sent from one person to another using addresses, and contains information. Email commonly contains information such as: sender name and computer address, list of recipient names and computer addresses, message subject, date and time composed, and message content. Sometimes, an Email message can have attached computer files such as pictures, programs, and data files. Firewall A program or device which serves as an intelligent and secure router of network data packets. These mechanisms are configured to restrict the flow of packets in different directions (i.e. to and from the Internet) based on the system addresses (a.k.a. IP addresses) of the connected computers. FTP "File Transfer Protocol". A program or feature popularly used over the Internet to transfer files between computers. Hacker A person who deliberately breaks into computer systems for entertainment, gain, or spite. The most sophisticated hackers spend all of their time breaking into computers. The risk that these people pose is that they often steal or damage software systems and information. Home Page A Web Page which is at the root of all Web Pages for a particular Web Site. A Home Page should portray the image that the company wants to project. Usually, these pages resemble marketing slicks, but with an interactive slant. This front page of a Web Site then provides hypertext links to the rest of the Web Site's content and possibly to Home Pages for other related Web Sites.
HTML "HyperText Markup Language". A standardized programming language used to create hypertext documents. Used to create all Web Pages on the Internet. Also allows definition of data forms which communicate with CGI compatible programs on the Internet. HTTP "HyperText Transfer Protocol". A communications protocol used by Internet Web Service software to send Web Pages to Web Browser software over the Internet. HyperText A type of text document which contains embedded "hotspots" which point to other sections of text or other documents. Any piece of text or graphic can be defined as a hotspot which points elsewhere. Internet (a.k.a. "The Information Superhighway"). A world-wide interconnection between thousands of computer networks on many different platforms, with over 10 million end users (and growing). The telecommunications backbone of the Internet is based on a network of U.S. government owned, national T3 lines. A growing number of Internet Providers are adding their own backbones. Internet Providers A community of competing businesses which provide "on-ramps to the Internet". The largest of these companies connect directly into the Internet backbone, or provide their own national or international backbones. Examples of true Internet Providers: Netcom, UUNet, CERFNet, SprintNet, and Spry. Examples of partial Internet Providers & partial Information Service Providers: CompuServe, Prodigy, and America On-Line. IRC "Internet Relay Chat". A program or feature popularly used on the Internet by individuals to chat with others, by typing and watching text-based dialog. Many topic specific IRC channels have been created on the Internet by users. These channels form a sort of forum for conference room discussion. Newsgroups A collection of forums which gather Email from Internet users about a specific subject. The collected Email entries (known as news articles) can then be perused by all Internet users. 
Some are simply for recreational discussions, while others may allow people to form self-supporting user groups. PGP "Pretty Good Privacy" encryption. A protocol for using private and public key encryption to secure Email and other Internet transactions. TCP/IP "Transmission Control Protocol / Internet Protocol". The network communication protocol used by all Internet computers. Similar in function to NetBIOS, SNA, or Novell Netware's IPX/SPX. Telnet A program or feature popularly used on the Internet by individuals to log into, and take control of, other computers on the Internet. VRML "Virtual Reality Modeling Language". A new, emerging language supported by the World Wide Web for programming virtual reality content on the Internet. Web Browser A type of program used by individuals which reads HTML files on the Internet and presents them to the user in a friendly and interactive way. Many such programs exist for many platforms. For UNIX, several GUI browsers are popular. For UNIX-based terminals or DOS-based PCs, Lynx provides a text interface to browse Web Pages. All Web Browsers allow the user to interactively jump from place to place by selecting hotspots (highlighted text or graphics). Some browsers allow the user to print page contents. Web Page or Web Document A single viewable unit of Web information. Often comprised of an HTML file with several referenced graphics files. Generally, each Web Page has hypertext links to other Web Pages. Web Site A collection of Web Pages built for or by a single company or individual. Usually provides one theme of content. A Web Site is not to be confused with a single physical location where a Web Server exists. It is a Cyber-Location. Web Server A combination of computer hardware, telecomm. lines, and HTTP server software.
World Wide Web, WWW, or The Web An intricate and vast web of information, tied together by hypertext links between multimedia documents residing on thousands of Internet computers around the globe.
f:\12000 essays\technology & computers (295)\Computer Languages.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Languages By Nicholas Soer Differences among computer languages are a topic that many people are not familiar with; I was one of those people before I started researching this topic. There are many different computer languages, and each is similar to the others in some ways but different in others, such as program syntax, the format of the language, and the limitations of the language. Most computer programmers start out in languages such as Turbo Pascal or one of the various types of BASIC. Pascal, BASIC, and Fortran are among the oldest computer languages, and many of today's modern languages are descended from one of these three, but are greatly improved. Both Turbo Pascal and BASIC are easy to understand, and their syntax is simple and straightforward. In BASIC, to print to the screen you simply type the word PRINT; in Turbo Pascal you would type writeln. These are very simple commands that the computer executes. To do the same in a language such as C or C++, you have to type more elaborate lines of code that are more confusing than the previous two. The format and layout of the various languages are very diverse between some and somewhat similar between others. When programming in BASIC, the user has to type a line number before each new line of code. In an updated version of BASIC called QBasic, line numbers are optional. Turbo Pascal does not use line numbers; it has preset keywords that separate each part of the program. This is similar to QBasic, but much more sophisticated.
Instead of using the command gosub as in BASIC, the Turbo Pascal user makes a procedure call. Another newer language is C, which is capable of doing more things than Turbo Pascal. The format and layout are similar, but the syntax is much more complex than Turbo Pascal's. C was later extended into a new version, C++. The main additions from C to C++ are the concepts of classes and templates; many small flaws in C were also fixed in the new version. Many of the languages have different limitations on the tasks they can perform. The newer the language, the more things you can do; things being accomplished today were thought impossible 20 years ago. Despite the differences between the many languages I have mentioned, and the others that I have not, the limits keep being pushed higher and higher as technology improves. This is a subject one could write on and on about, down to the minute differences between the many languages. After researching these main languages, I found that there are just as many similarities between languages as there are differences.
f:\12000 essays\technology & computers (295)\Computer Literacy.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
For over fifty years, beginning with the famous ENIAC, a revolution has been taking place in the United States and the world. The personal computer has changed the way many people think and live. With its amazing versatility, it has found its way into every area of life, and knowing how to operate it is a requirement for today's world. Those who have not taken the time to learn about computers often do not even know what to do once one has been turned on, and this problem should be corrected. That is why all high schools must make a computer literacy course a requirement for graduation.
Although a computer course would take away two or three periods of a high school student's weekly schedule, it will be well worth it in the real world. With so many careers today involving knowledge of a computer's basic functions, computer literacy plays a big part in job security. If a potential employee comes along demonstrating outstanding computer skills, he or she may take a job that formerly belonged to another employee if that employee doesn't even know how to check his e-mail. A good computer class would teach the basics of computers: typing a document in a word processor, running a specified program, and using a modem to check e-mail and access the Internet. Personal computers now have a tremendous entertainment value due to their versatility. Not only can a computer do all the things that are unique to computers, it can be a television and a radio as well. Computers have also attracted millions of people with games galore. Immersive, three-dimensional games such as Doom 2, Quake, and Duke Nukem 3D can keep people glued to their computers for hours. With current technology, two friends can connect from anywhere in the world via modem and play a blazing fast two-player game against one another. With the recent emergence of the Internet, friends who would normally have to pay 25 cents a minute to talk on the phone long distance can play and talk as long as they want for free. The most important reason for required computer classes, however, is the enormous amount of information available on the Internet. The Internet is a 24-hours-a-day, 7-days-a-week information resource that cannot be beaten by any library in the world. An experienced user can connect and find the information he is looking for in as little as ten minutes, without leaving the comfort of his own home. The Internet will only continue to grow as time passes, and being able to navigate it quickly and successfully is becoming more and more important.
A computer course is an advantageous investment in a student's future with today's technology. A personal computer is the most diverse machine in the world, and being familiar with its uses is a must to be successful. The amount of practical application it will have is astounding, and it will make all students more successful in today's changing world.
f:\12000 essays\technology & computers (295)\Computer Nerds.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
COMPUTER NERDS
A computer nerd is a person who uses a computer for the sheer sake of using one. Steve Wozniak fell in love with computers and how they worked. He built the Apple I, which formed the basis for the future of Apple Computer, Inc. Steve Wozniak also designed the Apple II, one of the first ready-made computers and one of the most popular ever made. It was a complete computer with keyboard and power supply. After he retired from Apple, Steve returned to the University of California at Berkeley and got his bachelor's degree in Computer Science. Steve Jobs was the co-founder of Apple Computer. At the age of 25 he was worth over 100 million dollars. He was fascinated by the effects of computers and amazed that a computer could take your ideas and translate them into information. He and Wozniak created the printed circuit board for the Apple I computer. Bill Gates started programming at the age of 13. When he was a student at Harvard University, he developed BASIC for the first microcomputer, the Altair. Gates believed that there would be a personal computer in every household. Gates and Paul Allen formed Microsoft in 1975, and today Gates is a very important leader at Microsoft. Paul Allen was also a co-founder of Microsoft. He bought a chip from a store, brought it back to Bill Gates, and then they called their friends. They loaded BASIC into the computer, and it worked, printing out the memory size. Paul left Microsoft in 1983 after an illness.
f:\12000 essays\technology & computers (295)\Computer Pornography.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.(Wallace: 3) This is a statement from a document that a group of individuals put together to ensure their own ideas and beliefs would never change. That group of people were the forefathers of the United States of America, and that document was the United States Constitution. The phrase was put into the Constitution because our forefathers wanted to protect their freedom of speech, something they cherished and something that in days past had been squashed by ruling governments. Today our freedom of speech is in danger again. The Government is now trying to censor what ideas go onto something we know as the Information Superhighway. The Internet is now supposed to be regulated so that it will be "safe" for everyone to enter. The Government passed a law known as the Telecommunications Act of 1996. Within the TA there is a part called the Communications Decency Act, or CDA. This part of the bill arose because of the recent surge of pornography on the Infobahn. The CDA criminalizes indecent speech on the Internet(Wallace: 1). The CDA describes indecent speech as anything "depicting or describing sexual or excretory acts or organs in patently offensive fashion under contemporary community standards." First take the word "indecent". This word is used because of its vague definition. Not only does this word ban sexually explicit materials, it also bans sexually explicit words. If this were applied to the real world, some of the greatest novels would be taken off the shelf. For example, there is the great lesbian novel The Well of Loneliness by Radclyffe Hall.
In that book there is a line that states "And that night, they were not divided." Clearly that would be considered a sexually explicit phrase(Wallace: 2). Now take the words "depicting or describing". The word "describing" translates into anything with purely explicit text. That would include any book converted and placed on the Internet with outspoken words or phrases. This goes against the First Amendment. Henry Miller's Tropic of Cancer and James Joyce's Ulysses could not possibly be posted online(Wallace: 2). "Sexual or excretory acts or functions": this would remove everything from sleazy bestsellers to 19th-century classics, such as Zola's La Terre and Flaubert's Madame Bovary, to nonfiction books on disease, rape, health, and sexual intercourse from our shelves. This phrase is again unconstitutional(Wallace: 2). Another phrase in there is "patently offensive". This is very subjective. These words mean that a jury can decide what is offensive and what is not(Wallace: 2). If there is a very conservative jury you get a very conservative verdict, but in the same respect, if you get a very liberal jury you get a liberal verdict. Would that be considered a fair trial? And last, "contemporary community standards". There is an easy example for understanding these words. In 1994 two California sysops [system operators] were found guilty of putting offensive material on their BBS [Bulletin Board System]. Their BBS was accessible to people all over the world, as long as whoever wanted the information called the California number they had set up. One day, someone in Memphis, Tennessee, called the number and found something disturbing to them. The two sysops were convicted because of the community standards in Tennessee, not the ones in California(Wallace: 3). There is no reason to treat the electronic and written word differently, especially because of the big conversion(Wallace: 3). More and more often, people are looking to the Internet to do reports and research.
It is one of the biggest resources in the world today. If the TA bill stays in effect, many of the books listed will not be downloadable. Mark Mangan, co-author of the book Sex, states, "A law burning books by Miller, Joyce, Burroughs, and Nabokov might also protect children who might get a hold of them, but would be completely unconstitutional under the First Amendment (Wallace: 4)." In 1994 a United States survey showed that 450,000 pornographic pictures and text files were accessible on the Net around the world and that these files were accessed more than 6 million times(Chidley: 58). This is one reason why the government passed the CDA. The Government rationalizes the CDA with two reasons: one, the protection of children; two, the claim that it is constitutional because the Internet is like a telephone or TV and can be regulated. The protection of children is not an issue the Government should handle. Proponents of the CDA have completely forgotten that a credit card number must be given to an ISP [Internet Service Provider] to get connected to the Net(Wallace: 4). Passwords can add further security. Parents let their children "veg out" in front of the TV all day, so of course you would figure that those same parents are going to let them surf the net when they want to(Bruce: 3). Donna Rice Hughes, formerly with Sen. Gary Hart but now a born-again Christian and president of Enough is Enough!, an anti-pornography-on-the-Net organization, states, "Any child can access it . . . and once they've seen it, it can't be erased from their minds(Jerome: 51)." First, modem communication on a phone line is just static. A computer, modem, communications software, and Internet access are needed. These a child cannot purchase. Second, there are many security measures on a computer so that a child cannot access certain parts of the home system(Lohr: 1). If the parent is responsible enough, they should know more about the PC they purchased than their child.
Third, this quote sums up the biggest argument: "And it is not as if cybersurfers are inundated with explicit images. Users have to go looking for the images in the unorganized and complex network, and even need special decoders" to translate what is written into a file(Chidley: 58). Jeffrey Shallit, an associate professor at the University of Waterloo in Ontario and treasurer of Electronic Frontier Canada, an organization devoted to maintaining free speech in Cyberspace, says, "Every new medium of expression will be used for sex. Every new medium of expression will come under attack," usually because of the previous sentence(Chidley: 58). If the regulation passes, there will just be another way of getting around it. One example is encryption, a way of encoding information sent to another person via the Net so that it can only be decoded on the other side. As Internet pioneer John Gilmore once said, "The Net interprets censorship as damage and routes around it(Barlow: 76)." I decided to try "trading" myself and was startled when I completed two online interviews with some known traders. The "nicknames" of the two persons I talked to were GMoney and BigGuy. First I needed to get on the chatlines. I downloaded a program called mIRC, an IRC (Internet Relay Chat) client. This program is free, and it downloaded in a matter of minutes. It was very easy to set up, and before I knew it I was on an IRC channel. If a child knew of this program, it would have been very easy for them to access the channel I was on. The channel was called !!!!SEXPIX!!!!. The side bar noted: "All the pics you want from horses to grandmas." I decided this would be a good place to start. Inside the channel there were 27 other people. You can talk to each one individually or talk to the whole group if you like. It's like sitting in a circle in a room full of strangers. The first of the two interviews I did was with GMoney. I first asked how often he traded pictures. He said usually once or twice a day.
He told me he tried to do it fast so his mother wouldn't catch him. So I immediately asked how old he was. He replied, "13/M I guess I shouldn't be doing this but I just think these things are cool. Once I started I can't stop now. People are so f_cked up it's unreal." I then asked why he traded, and he responded, "I think its just to see what screwed up things are really going on." I also asked if he would try anything he saw in the pictures. He wrote, "God no you see what goes on. I would never do any of that weird sh_t. Now some of the things I see being done to girls. I think I'll enjoy... I don't think that's that bad though." The other interview, with BigGuy, was not much better. BigGuy was a 25-year-old female. She said that her husband was the one who usually did it, and that he ran a web page with pornography on it. When asked what she thought of the CDA, she typed, "It's ridiculous how could anyone think that censorship could stop the trading of pornography on the Internet." I later asked if they somehow checked whether minors could access their web page. She responded, "No I wont let him. We have a theory. We ask for their email address. They must have one. We then email them and tell them the password to get into the board. We figure that the children won't let us email them in case their parents find the letter. It's not fool-proof but it stops some of it." The CDA hits smaller ISPs harder than the larger ones because of the different types of users on each system(Emigh: 1). The bill has good points and bad ones. Steve Dasbach, Libertarian Party Chair, states that, "This bill is censorship. This bill threatens to interrupt and curb the rapid evolution of electronic information systems. This bill isn't needed. This bill usurps the role of parents("CDA: LP calls new bill `high-tech censorship'.": 1)."
Clifford Stoll, a renowned Internet scientist and author of the 1989 bestseller The Cuckoo's Egg, when asked "Are you concerned about the abundance of pornography on the Net?" said: Well, I can't get worked up over it. Some people say, `Oh no, my kid just downloaded this image that has explicit sex in it.' Yeah, sad to say, it's true. Sad to say that just like every place in society, there are reptiles who will exploit children. Certainly, the child molester will find a way to use the computer networks to find victims, just as child molesters take advantage of cars and ordinary roadways to get around. But the concerns with cars and roadways go deeper than simply the fact that child molesters use them(Chidley: 59). The computer industry describes the CDA as unconstitutionally vague, and it subjects computer networks to more restrictive standards than any form of written work such as books, magazines, and other printed materials(Chidley: 59). When it comes to almost anything, basic ethics are broken every day, whether in business, on the Internet, or in your own home(Lester: 1). There will always be someone who finds a way around the rules. The CDA, as written, gives no guidance but instead simply tries to ban Internet pornography(Wallace: 1). As stated by Steve Dasbach, "The Communications Decency Act is a case of 20th-century politicians using 19th-century laws to control 21st-century technology("CDA: LP calls new bill `high-tech censorship'.": 1)." Two easy cures for this unorganized, uncensored, uncontrollable Internet are: first, promoting the use of child-safe Internet Service Providers, and second, the use of local screening software(Wallace: 5). The Government should not be responsible for censorship. If it were, it would have to do it as a whole, and this would be unconstitutional. Eliminate the problem by choice, not by force. Works Cited BigGuy. Online Personal Interview. washington.dc.us.undernet.org/port=6667 (20 Jun. 1996). Bruce, Marty. "Censorship on the Internet."
Censorship on the Internet. 1996. (29 Jun. 1996). "CDA: LP calls new bill `high-tech censorship'." Libertarian Press July 1995. (29 Jun. 1996). Chidley, Joe. "Reality Check." MacLean's 22 May 1995: 59. Chidley, Joe. "Red-Light District." MacLean's 22 May 1995: 58. Emigh, Jacqueline. "Computers & Privacy - Telecom Act Hits ISPs Hard 04/02/96." Computers & Privacy. 02 Apr. 1996. (18 Jun. 1996). GMoney. Online Personal Interview. washington.dc.us.undernet.org/port=6667 (20 Jun. 1996). Jerome, Richard and Linda Kramer. "Monkey Business No More." People Weekly 19 Feb. 1996: 51+. Lester, Meera. "What's Your Code of Ethics?" _VJF_Library_Career_Resources: What's Your Code of Ethics? 1996. (29 Jun. 1996). Lohr, Steve. "Censorship on the Internet: Pre-emptory Effort At Self-Policing," New York Times 13 March 1996, sec. C: 3. Wallace, Jonathan and Mark Mangan. "The Internet Censorship FAQ." The Internet Censorship FAQ. 1996. (29 Jun. 1996).
f:\12000 essays\technology & computers (295)\Computer Programming.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Programming
Choosing a career to research can be a little easier when you have some general knowledge of a particular field of work. There are many different types of jobs one can decide to undertake, one of which is among the most popular lines of work today: computer programming. Although this line of work might seem a little tiresome, it can be enjoyable for people with lots of patience and the will to do long and tedious work. Most programmers in large corporations work in teams, with each person focusing on a specific aspect of the total project(AOL). Programmers write the detailed instructions for a computer to follow. A computer programmer carefully studies the program that best suits the employer's needs. They may also work for a large computer corporation, developing new software and/or improving older versions of these programs.
Programmers write specific programs by breaking each task down into a logical series of steps for the computer to follow. After long hours of writing a program, the programmer must thoroughly test and revise it. Generally, programmers create software using the following basic step-by-step development process:
(1) Define the scope of the program by outlining exactly what the program will do.
(2) Plan the sequence of computer operations, usually by developing a flowchart (a diagram showing the order of computer actions and data flow).
(3) Write the code--the program instructions encoded in a particular programming language.
(4) Test the program.
(5) Debug the program (eliminate problems in program logic and correct incorrect usage of the programming language).
(6) Submit the program for beta testing, in which users test the program extensively under real-life conditions to see whether it performs correctly(AOL).
Programmers are grouped into two types: application programmers and systems programmers. These programmers write the software that changes a basic machine into a personal tool that not only is useful for increasing productivity but can also be fun and entertaining for the user. Applications programmers write commercial programs to be used by businesses, in scientific research centers, and in the home. They focus primarily on business, engineering, or science tasks, such as writing a program to direct the guidance system of a missile to its target (Information Finder). Systems programmers write the complex programs that control the inner workings of the computer, maintaining the software that controls the operation of the entire computer system. They make changes to the instructions that control the central processing unit, which, in turn, controls the computer's hardware itself(FL View #475). They also help application programmers determine the source of problems that may occur with their programs.
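The six steps listed above can be sketched in miniature. Python is used here purely for illustration (the process applies to any language), and the `average` function is a made-up example program, not anything from the essay's sources:

```python
# Step 1: scope -- compute the average of a list of exam grades.
# Step 2: plan  -- sum the grades, divide by the count, and guard
#                  against an empty list (the flowchart's only branch).

def average(grades):
    """Step 3: the code itself."""
    if not grades:                 # the branch from the flowchart
        return 0.0
    return sum(grades) / len(grades)

# Steps 4 and 5: test and debug -- a quick check would catch, for
# example, an early version that lacked the empty-list guard and
# crashed with a division by zero.
assert average([80, 90, 100]) == 90.0
assert average([]) == 0.0          # would crash without the guard
```

Step 6, beta testing, would then hand the program to real users; the point of the sketch is only that even a trivial program passes through all six stages.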
Many specialty areas exist within these two large groups, such as database and telecommunications programming. Computer programmers can attend nearly any college or school, because employers' needs vary. Most programmers are college graduates who have taken special courses in the programming field. Many employers prefer experience in accounting, inventory control, and other business skills. Employers look for people who can think logically and have patience when doing analytical work(Information Finder). The entry-level salary of a computer programmer fresh out of college was in the area of $30,000 in 1989(Occ. Outlook Handbook 115). More experienced programmers, with five to ten years in the field, earn about $40,000 or more annually, and the top professionals get nearly $60,000 per year(S.I.R.S. CD-ROM). Employers are looking for ways to cut costs, and minimizing on-the-job training is one way to do that, so many employers prefer to hire people with previous experience in the field. To have the best chance of landing the job of their choice, aspiring programmers must learn many computer languages. Scientific and industrial software can be enormous. (The Shuttle program, for example, consists of a total of about half a million separate instructions and was written by hundreds of programmers.) For this reason, scientific and industrial software sometimes costs much more than the computers on which the programs run. Programmers work mostly at a desk in front of a computer all day. They usually work between 40 and 50 hours a week, and more if they have to meet crucial deadlines. Programmers might arrive at work early or work late occasionally, depending on the circumstances at the workplace. The employment outlook of the computer programming field is very good, with fast growth expected through the year 2000(Occ. Outlook Handbook 115). Most of the job openings for programmers will probably result from replacement needs.
The need for computer programmers will increase as businesses, government, schools, and scientific organizations seek new applications for computer software and improvements to software already in use. The computer programming field is not an easy line of work to be successful in, nor is it an easy one to get into. This job makes a lot of demands on a person, such as working late hours, writing complex programs that don't always work properly, and having the patience and time needed to be a successful computer programmer. Works Cited Florida View 1990: Careers Black & White. Florida Dept. of Education. 1990, occ. #475. Florida View 1991: Careers Black & White. Florida Dept. of Education. 1990, occ. #362. Information Finder by World Book. Chicago: World Book, Inc., 1992. Occupation Outlook Handbook. 1990-91 edition; United States Department of Labor, 1991. Social Issues Resources Series. SIRS Combined Text & Index, 1993 SIRS, Inc. Spring 1993. America Online Database. America Online, Inc. 1995.
f:\12000 essays\technology & computers (295)\Computer Protection.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
About two hundred years ago, the word "computer" started to appear in the dictionary. At the time, some people did not even know what a computer was. Today, however, most people not only know what a computer is but understand how to use one. Computers have become more and more popular and important to our society. We can use computers everywhere, and they are very useful and helpful in our lives. The speed and accuracy of computers have made people feel confident in them and rely on them. Therefore, much important information and data is saved on computers, such as your diary, the financial situation of an oil company, or secret intelligence of a military department. A lot of important information can be found in the memory of a computer.
So, people may ask a question: Can we make sure that the information in the computer is safe and that nobody can steal it from the computer's memory? Physical hazards are one cause of destroyed data. For example, spilling a flood of coffee onto a personal computer could endanger its hard disk. Besides that, the human caretakers of a computer system can cause as much harm as any physical hazard. For example, a cashier in a bank can transfer money from one of his customers' accounts to his own. Nonetheless, the most dangerous thieves are not those who work with computers every day, but the youthful amateurs who experiment at night --- the hackers. The term "hacker" may have originated at M.I.T. as students' jargon for classmates who labored nights in the computer lab. In the beginning, hackers were not dangerous at all; they just stole computer time from the university. In the early 1980s, however, hackers became a group of criminals who stole information from other people's computers. To guard against hackers and other criminals, people need to set up a good security system to protect the data in their computers. The most important thing is that we cannot allow those hackers and criminals to enter our computers. That means we need to design a lock to lock up all our data, or use identification to verify the identity of anyone seeking access to our computers. The most common method of locking up data is a password system. Passwords are a multi-user computer system's usual first line of defense against hackers. We can use a combination of alphabetic and numeric characters to form our own password. The longer the password, the more possibilities a hacker's password-guessing program must work through. However, it is difficult to remember a very long password, so people will try to write the password down, which immediately makes it a security risk.
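The point about password length can be made concrete with a little arithmetic. This is a minimal sketch, assuming a 62-character alphabet of upper-case letters, lower-case letters, and digits (26 + 26 + 10):

```python
# Number of possible passwords of a given length, drawn from
# upper-case letters, lower-case letters, and digits (26+26+10 = 62).
ALPHABET_SIZE = 62

def search_space(length):
    # Each position can hold any of the 62 characters, so the total
    # number of combinations is 62 raised to the password's length.
    return ALPHABET_SIZE ** length

# Each added character multiplies the attacker's work by 62: a
# 4-character password has about 14.8 million possibilities, while
# an 8-character one has over 218 trillion.
for n in (4, 6, 8):
    print(n, search_space(n))
```

The same arithmetic explains why a short password falls quickly to a high-speed guessing program, while each extra character buys the defender another factor of 62.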
Furthermore, a high-speed password-guessing program can find out a password easily. Therefore, a password system alone is not enough to protect a computer's data and memory. Besides a password system, a computer company must also consider the security of its information centre. In the past, people used locks and keys to limit access to secure areas. However, keys can be stolen or copied easily, so card-keys were designed to prevent this. Three types of card-keys are commonly used by banks, computer centers, and government departments. Each of these card-keys can carry an identifying number or password that is encoded in the card itself, and all are produced by techniques beyond the reach of the average computer criminal. One of the three card-keys is the watermark magnetic card. It was inspired by the watermarks on paper currency. The card's magnetic strip holds a 12-digit number code that cannot be copied, and it can store about two thousand bits. The other two cards can store thousands of times as much data in the magnetic strip. They are optical memory cards (OMCs) and smart cards, and both are often used in computer security systems. However, using password systems and card-keys alone is still not enough to protect the memory in the computer. A computer system also needs a restricting program to verify the identity of its users. Generally, identity can be established by something a person knows, such as a password, or something a person has, such as a card-key. However, people often forget their passwords or lose their keys, so a third method must be used: something a person is --- a physical trait of a human being. We can use a new technology called biometric devices to identify the person who wants to use the computer. Biometric devices are instruments that perform mathematical analyses of biological characteristics.
For example, voices, fingerprints, and the geometry of the hand can be used for identification. Nowadays, many computer centers, bank vaults, military installations, and other sensitive areas have considered using biometric security systems, because the rate of mistaken acceptance of outsiders and rejection of authorized insiders is extremely low. The individuality of a vocal signature is the basis of one kind of biometric security system, whose main point is voice verification. The voice verifier described here is a developmental system at American Telephone and Telegraph. All a person needs to do is repeat a particular phrase several times. The computer samples, digitizes, and stores what was said, then builds up a voice signature that makes allowances for an individual's characteristic variations. The theory of voice verification is very simple: it uses the characteristics of a voice, namely its acoustic strength. To isolate personal characteristics within these fluctuations, the computer breaks the sound into its component frequencies and analyzes how they are distributed. If someone wants to steal information from your computer, that person would need the same voice as you, which is practically impossible. Besides using voices for identification, we can use fingerprints to verify a person's identity, because no two fingerprints are exactly alike. In a fingerprint verification system, the user places one finger on a glass plate; light flashes inside the machine, reflects off the fingerprint, and is picked up by an optical scanner. The scanner transmits the information to the computer for analysis, and security experts can then verify the identity of that person from the results. Finally, the last biometric security system is the geometry of the hand. In that system, the computer uses a sophisticated scanning device to record the measurements of each person's hand.
With an overhead light shining down on the hand, a sensor underneath the plate scans the fingers through the glass slots, recording light intensity from the fingertips to the webbing where the fingers join the palm. After passing the computer's inspection, a person can use the computer or retrieve data from it. Although a lot of security systems have been invented, they are useless if people think that stealing information is not a serious crime. Therefore, people need to pay more attention to computer crime and fight against hackers, instead of relying only on computer security systems to protect the computer. Why do we need to protect our computers? It is a question few people asked in the early days of computing; today, however, everyone knows the importance and usefulness of a computer security system. Computers have become more and more important and helpful. You can store a large amount of information or data on a small memory chip in a personal computer. The hard disk of a computer system is like a bank: it contains a lot of costly material, such as your diary, the financial situation of a trading company, or secret military information. Protecting it, therefore, is just like hiring security guards to protect a bank. A computer security system can be used to prevent the outflow of information from the national defense industry, or of the personal diary on your computer. Nevertheless, there is a price that one might expect to pay for the tools of security: equipment ranging from locks on doors to computerized gatekeepers that stand watch against hackers, and special software that prevents employees from stealing data from the company's computers. The bill can range from hundreds of dollars to many millions, depending on the degree of assurance sought. Although it costs a lot of money to create a computer security system, it is worth it.
That is because the data in a computer can easily be erased or destroyed by many kinds of hazards. For example, a power supply problem or a fire can destroy all the data in a computer company. In 1987, in a computer centre inside the Pentagon, the US military's sprawling headquarters near Washington, DC, a 300-watt light bulb was left burning inside a vault where computer tapes were stored. After a time, the bulb had generated so much heat that the ceiling began to smolder. When the door was opened, air rushing into the room brought the fire to life. Before the flames could be extinguished, they had spread and consumed three computer systems worth a total of $6.3 million.

Besides such accidental hazards, humans are a great cause of the outflow of data from computers. Two kinds of people can get into a security system and steal data from it. One is the trusted employees who are authorized to use the computer system, such as programmers, operators, or managers. The other is the young amateurs who experiment at night: the hackers. Consider the trusted workers first. They are the group who can most easily become criminals, directly or indirectly. They may steal the information in the system and sell it to someone else for a great profit. Alternatively, they may be bribed by someone who wants to steal the data, because it may cost a criminal far less in time and money to bribe a disloyal employee than to crack the security system. Besides disloyal workers, hackers are also very dangerous. The term "hacker" originated at M.I.T. as students' jargon for classmates who worked in the computer lab at night. In the beginning, hackers were not dangerous at all; they just stole hints for university tests. By the early 1980s, however, "hacker" had come to mean a kind of criminal who steals information from commercial companies or government departments.

What can we use to protect the computer?
We have talked about the reasons for using a computer security system. But what kinds of tools can we use to protect the computer? The most common is a password system. Passwords are usually a multi-user computer system's first line of defense against intrusion. A password may be any combination of alphabetic and numeric characters, up to maximum lengths set by the particular system; most systems can accommodate passwords up to 40 characters. However, a long password is easily forgotten, so people may write it down, which immediately creates a security risk. Other people may use their first name or another significant word; with a dictionary of 2,000 common names, an experienced hacker can crack such a password within ten minutes.

Besides password systems, card-keys are also commonly used. Each kind of card-key carries an identifying number or password encoded in the card itself, and all are produced by techniques beyond the reach of the average computer criminal. Three types of card are usually used: the magnetic watermark card, the optical memory card, and the smart card. However, both of these tools can be compromised: passwords are often forgotten by their users, and card-keys can be copied or stolen. Therefore, we need a higher level of computer security. A biometric device offers safer protection for the computer, because it can reduce the probability of mistakenly accepting an outsider to an extremely low level. Biometric devices are instruments that perform mathematical analyses of biological characteristics. However, the time required to pass the system should not be too long, and the system should not inconvenience the user; for example, a system that required people to remove their shoes and socks for footprint verification would be unacceptable. The individuality of the vocal signature is one kind of biometric security system.
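The dictionary attack mentioned above is simple enough to sketch. Everything here is invented for illustration (the tiny name list stands in for the 2,000-name dictionary, and the hash scheme is a generic one, not any particular system's):

```python
import hashlib

def crack(stored_hash, dictionary):
    """Try every word in a dictionary against a stored password hash --
    the attack the text says takes about ten minutes with 2,000 names."""
    for word in dictionary:
        if hashlib.sha256(word.encode()).hexdigest() == stored_hash:
            return word
    return None

common_names = ["alice", "bob", "carol", "dave", "wayne"]  # stand-in dictionary
stored = hashlib.sha256(b"wayne").hexdigest()  # user chose their own name

print(crack(stored, common_names))   # wayne
print(crack(stored, ["zq7#kf"]))     # None -- an uncommon password survives
```

The loop is trivial, which is exactly the point: a password drawn from a small, guessable set gives no real protection no matter how the system stores it.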
Although still in the experimental stage, reliable computer systems for voice verification would be useful for both on-site and remote user identification. The voice verifier described here is a developmental system at American Telephone and Telegraph. Enrollment would require the user to repeat a particular phrase several times. The computer would sample, digitize, and store each reading of the phrase and then, from the data, build a voice signature that makes allowances for an individual's characteristic variations.

Another biometric device measures the act of writing. The device consists of a biometric pen and a sensor pad. The pen converts a signature into a set of three electrical signals through one pressure sensor and two acceleration sensors. The pressure sensor detects changes in the writer's downward pressure on the pen point, while the two acceleration sensors measure the vertical and horizontal movement of the pen.

The third device we want to talk about scans the pattern in the eyes. It uses an infrared beam that scans the retina in a circular path. A detector in the eyepiece measures the intensity of the light as it is reflected from different points. Because blood vessels do not absorb and reflect the same quantities of infrared as the surrounding tissue, the eyepiece sensor records the vessels as an intricate dark pattern against a lighter background. The device samples light intensity at 320 points around the path of the scan, producing a digital profile of the vessel pattern. Enrollment can take as little as 30 seconds, and verification can be even faster, so a legitimate user can pass the system quickly while intruders are rejected accurately.

The last device we want to discuss maps the intricacies of a fingerprint.
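The 320-point retina profile lends itself to a simple comparison sketch. The intensity values, noise model, and thresholds below are all invented; a real system would use calibrated optics and tuned tolerances rather than these toy numbers.

```python
import random

def profile_match(stored, scanned, point_tolerance=8, max_mismatch=10):
    """Compare two 320-point light-intensity profiles. A few points may
    differ (alignment, noise), but a genuine eye matches almost everywhere."""
    assert len(stored) == len(scanned) == 320
    mismatches = sum(1 for a, b in zip(stored, scanned)
                     if abs(a - b) > point_tolerance)
    return mismatches <= max_mismatch

rng = random.Random(1)
enrolled = [rng.randint(0, 255) for _ in range(320)]          # stored profile
rescan = [min(255, max(0, v + rng.randint(-3, 3))) for v in enrolled]
other_eye = [rng.randint(0, 255) for _ in range(320)]         # different pattern

print(profile_match(enrolled, rescan))     # True
print(profile_match(enrolled, other_eye))  # False
```

A re-scan of the same eye differs only slightly at each point and passes; an unrelated vessel pattern disagrees at nearly every one of the 320 points and is rejected.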
In the verification system, the user places one finger on a glass plate; light flashes inside the machine, reflects off the fingerprint, and is picked up by an optical scanner. The scanner transmits the information to the computer for analysis. Although scientists have invented many kinds of computer security systems, no combination of technologies promises unbreakable security. Experts in the field agree that someone with sufficient resources can crack almost any computer defense. Therefore, the most important thing is people's conduct. If everyone in this world behaved well, there would be no need for complicated security systems to protect our computers.

----------------

f:\12000 essays\technology & computers (295)\Computer Revolution.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The Computer Revolution

If I were to make a history book of the years from 1981 to 1996, I would put computers on the cover. Computers, you may ask? Yes, computers, because if there were suddenly no computers in the world, there would be total chaos. People could not communicate, commute, make business transactions, purchase things, or do most things in their daily routine, because even power plants use computers to control the production of electricity. Computers have evolved extremely rapidly in the past fifteen years. Ten years ago, all that you could do with a computer was primarily make mathematical calculations and type documents, and doing that required typing in a series of complex codes that took a great deal of training to learn. Then the Apple computer company adopted a simpler system of computer language that used words which made sense in their context. This system was called BASIC. BASIC was a major development in the computer industry, because it made computers accessible to the average American.
This helped greatly in proving that computers were no longer just toys and that they had a very useful purpose, though most people still felt the cost was too great for a glorified typewriter. Several years after the introduction of the BASIC system, Apple introduced a new line of computers called the Macintosh. These Macintosh computers were extremely easy to use, and they cost about the same as a computer that used BASIC. Apple's business exploded with the Mac: Macintoshes were put in schools and millions of homes, proving that the computer was an extremely useful tool after all. The Macintosh made such an impact on the computer industry that IBM and Microsoft joined forces behind the MS-DOS system, which became the basis of the Windows program that made Bill Gates the multi-billionaire he is. With Windows and the Apple system, the modem, which had been around for several years, could be used to its full potential. Instead of linking one computer to another, millions of computers could now be linked to massive mainframes through on-line services such as America On-Line or Prodigy. People finally had full, affordable access to the World Wide Web and could communicate with people across the street or across the world. The Internet is used by millions of people across the world each day for a vast variety of reasons: getting help with homework, reading a magazine, getting business information such as stock quotes, planning a trip and making reservations, sending and receiving e-mail, even listening to music or watching a video clip. Businesses would come to a grinding halt if their computers suddenly stopped. Businesspeople would not be able to communicate with one another, because they could not use phones, pagers, cellular phones, fax machines, or e-mail. People could not write documents, because many offices do not even have a typewriter.
Business is not the only aspect of our lives that is affected by computers: we could not buy things at the store, use most appliances in our homes, drive our cars, or even get electricity, because the power companies use computers to control the flow of electricity. In the future, computers will play an even more important role in our lives. Computers will link the citizens of this country with the government, some day making it possible for citizens to vote directly on each bill that comes up for a vote. This would make America the first true direct democracy since the Greeks.

f:\12000 essays\technology & computers (295)\Computer Security 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Chapter # 5

1 - Define encryption and explain how it is used to protect transmission of information.

Encryption is a method of scrambling data in some manner during transmission. In periods of war, the use of encryption becomes paramount so that messages are not intercepted by the opposing forces. There are a number of different ways to protect data during transmission. One is substitution (character-for-character replacement), in which one unit (usually a character) of cipher text (unintelligible text or signals produced through an encryption system) is substituted for a corresponding unit of plain text (the intelligible text or signals that can be read without decryption), according to the algorithm in use and the specific key. The other method is transposition (rearrangement of characters), an encryption process in which the units of the original plain text (usually individual characters) are simply moved around; they appear unchanged in the cipher text except for their relative location.

Case Study (Bank of Shenandoah Valley)

While both encryption and authentication methods provide some measure of security, the implementation of security itself takes a totally different approach.
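The two transmission-protection methods defined above can be sketched directly. These are toy versions for illustration only (the reversed-alphabet key and the four-column grid are invented, and neither would be secure in practice):

```python
def substitute(plain, key):
    """Substitution: each plain-text character is replaced
    character-for-character according to the key."""
    table = str.maketrans("abcdefghijklmnopqrstuvwxyz", key)
    return plain.translate(table)

def transpose(plain, cols=4):
    """Transposition: characters are unchanged but moved around --
    write the text into rows of `cols` and read it out by columns."""
    rows = [plain[i:i + cols] for i in range(0, len(plain), cols)]
    return "".join("".join(r[c] for r in rows if c < len(r))
                   for c in range(cols))

key = "zyxwvutsrqponmlkjihgfedcba"  # toy key: the alphabet reversed

print(substitute("attack", key))   # zggzxp   (characters changed, order kept)
print(transpose("attackatdawn"))   # acdtkatawatn (characters kept, order changed)
```

The contrast is visible in the output: substitution changes what each character is, transposition changes only where each character sits.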
Before any method is chosen, the two most important factors in a security implementation have to be determined: the level of security needed and the cost involved, so that appropriate steps can be taken to ensure a safe and secure environment. In this case, the Bank of Shenandoah Valley is in a type of business in which a high level of security is required; therefore, I would suggest the use of an encryption method with a complex algorithm. Although an authentication method is secure as well, it is not as complex as encryption with a complex algorithm, which has been used by the military in wartime, where high levels of security are a must. During a war, the use of encryption becomes paramount so that messages are not intercepted by the opposing forces. This is a perfect example of how reliable an encrypted message can be when used within its appropriate guidelines.

Chapter # 6

4 - Describe the three different database models: hierarchical, relational, and network.

For data to be effectively transformed into useful information, it must be organized in a logical, meaningful way. Data is generally organized in a hierarchy that starts with the smallest unit (or piece of data) used by the computer and then progresses into the database, which holds all the information about the topic. In the hierarchical model, the data is organized in a top-down, inverted-tree-like structure. At the top of every tree or hierarchy is the root segment, the element of the tree that corresponds to the main record type. The hierarchical model is best suited to situations in which the logical relationship between data can be properly presented with the one-parent-many-children (one-to-many) approach. In a hierarchical database, all relationships are one-to-one or one-to-many, but no group of data can be on the "many" side of more than one relationship. A network database is a database in which all types of relationships are allowed.
The network database is an extension of the hierarchical model, in which the various levels of one-to-many relationships are replaced with owner-member relationships in which a member may have many owners. In a network database structure, more than one path can often be used to access data. Databases structured according to either the hierarchical model or the network model suffer from the same deficiency: once the relationships are established between the data elements, it is difficult to modify them or to create new relationships. A relational database describes data using a standard tabular format in which all data elements are placed in two-dimensional tables that are the logical equivalent of files. In relational databases, data is accessed by content rather than by address (in contrast with hierarchical and network databases): relational databases locate data logically rather than physically. A relational database has no predetermined relationships between the data, such as one-to-many sets or one-to-one.

Case Study (D'Angelo Transportation, Inc.)

A number of factors ought to be discussed:
- How much of the system should be computerized?
- Should we purchase software or build it based on what we are using in the current system (make-versus-buy analysis)?
- If we decide to make the new system, should we design an on-line or a batch system?
- Should we design the system for a mainframe computer, a minicomputer, microcomputers, or some combination?
- What information technologies might be useful for this application?

The security issues include the level of security required and the cost involved in this conversion. A database system is vulnerable to criminal attack at many levels. Typically, it is the end user rather than the programmer who is (often, but not always) guilty of the simple misuse of applications. Thus, it is essential that the total system be secure.
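The relational idea of locating rows by content rather than by a predetermined path can be shown with a tiny table. The table and data are invented for illustration (they are not from the D'Angelo case), using Python's built-in sqlite3 module:

```python
import sqlite3

# A relational table is a two-dimensional grid, the logical equivalent
# of a file. No parent-child path is wired in ahead of time.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE shipments (id INTEGER, driver TEXT, city TEXT)")
db.executemany("INSERT INTO shipments VALUES (?, ?, ?)",
               [(1, "Rossi", "Boston"),
                (2, "Chen", "Albany"),
                (3, "Rossi", "Albany")])

# Access by content: any combination of columns can be queried, even one
# the designer never anticipated -- unlike a hierarchical or network
# model, where only the established relationships can be traversed.
rows = db.execute("SELECT id FROM shipments WHERE driver = ? AND city = ?",
                  ("Rossi", "Albany")).fetchall()
print(rows)  # [(3,)]
```

The same query against a hierarchical store would require that a driver-to-city relationship had been designed in from the start; here it costs nothing to ask.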
The two classifications of security violations are malicious and accidental. One of the most emphasized and significant factors of any program development is the early involvement of the end users. This acquaints the programmer as well as the end user with the important functionality of the new system, and helps them adapt to the new working environment more efficiently and effectively. Continuous training of the staff is essential to meeting the objectives of the organization, since it provides the skills and expertise necessary to deal with daily issues using the new system.

f:\12000 essays\technology & computers (295)\Computer Security 3.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

As defined in Computer Security Basics by O'Reilly & Associates, Inc., biometrics is the use of a person's unique physiological, behavioral, and morphological characteristics to provide positive personal identification. The biometric systems currently available examine fingerprints, handprints, and retina patterns. Systems that are close to biometrics but are not classified as such are behavioral systems such as voice, signature, and keystroke systems; they test patterns of behavior, not parts of the body. It seems that in the world of biometrics, the more effective the device, the less willing people are to accept it. Retina pattern devices are the most reliable, but most people hate the idea of a laser shooting into their eye. Yet people don't mind something such as monitoring keystroke patterns, though it is not nearly as effective. Biometric verification is forecast to be a multibillion-dollar market in this decade. There is no doubt that financial credit and debit cards are going to be the biggest part of the biometric market. There are also many significant niche markets which are growing rapidly.
For example, biometric identification cards are being used at a university in Georgia to allow students to get their meals, and in a Maryland day-care center to ensure that the right person picks up the right child. In Los Angeles, fingerprints are being used to stop welfare fraud. They are also being used by frequent business travellers for rapid transit through immigration and customs in Holland, and now at JFK and Newark airports in the United States. Biometrics could also be used simply to prevent one employee from "punching in" for someone else, or to prevent someone from opening an account at a bank under a false name. Then there is the security access market: access to computer databases, to premises, and to a variety of other areas. The Sentry program made by Fingerprint Technologies uses several devices at once. The system first prompts for a user name and password; then the user's fingerprint scan must match what is on record. The system can also use a video camera to capture real-time photographs, which can be incorporated into the database. Scanning and gaining entrance to the building takes from 6 to 10 seconds, depending on what other information the operator wishes the user to enter. The system also keeps three of the individual's finger patterns on record, in case one of the fingers is injured. Biometrics is still relatively new to most people, and good equipment will remain expensive to purchase until it becomes more popular and the technology improves. As people become more aware of how the systems work, they will become more accepting of the more secure systems and not shy away from them as much.
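A layered check in the spirit of the Sentry system described above can be sketched as follows. This is not Sentry's actual logic: the user record, the "minutiae" sets, and the match threshold are all invented, and real fingerprint matching is far more involved than a set intersection.

```python
import hashlib

# Invented user database: a password hash plus three stored finger
# patterns (so a match is still possible if one finger is injured).
USERS = {"wanderson": {
    "pw_hash": hashlib.sha256(b"s3cret").hexdigest(),
    "fingers": [{0x1A2B, 0x3C4D, 0x5E6F}, {0x7788, 0x99AA}, {0xBBCC}],
}}

def minutiae_match(stored, scanned, needed=2):
    """Toy matcher: enough shared feature points counts as a match."""
    return len(stored & scanned) >= needed

def admit(user, password, scan):
    """Factor one: user name and password. Factor two: fingerprint."""
    rec = USERS.get(user)
    if rec is None or hashlib.sha256(password.encode()).hexdigest() != rec["pw_hash"]:
        return False  # first factor failed; fingerprint never consulted
    return any(minutiae_match(f, scan) for f in rec["fingers"])

print(admit("wanderson", "s3cret", {0x1A2B, 0x3C4D}))  # True
print(admit("wanderson", "wrong",  {0x1A2B, 0x3C4D}))  # False
```

The point of layering is visible in the flow: a stolen password alone fails at the fingerprint stage, and a lifted print alone fails at the password stage.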
f:\12000 essays\technology & computers (295)\COMPUTER SECURITY ANALYSIS.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

===================================
=INTRODUCTION TO DENIAL OF SERVICE=
===================================

Brian -------- Bri000001@aol.com
Last updated: Friday, March 28, 1997 10:19:23 AM

.0. FOREWORD
.A. INTRODUCTION
.A.1. WHAT IS A DENIAL OF SERVICE ATTACK?
.A.2. WHY WOULD SOMEONE CRASH A SYSTEM?
.A.2.1. INTRODUCTION
.A.2.2. SUB-CULTURAL STATUS
.A.2.3. TO GAIN ACCESS
.A.2.4. REVENGE
.A.2.5. POLITICAL REASONS
.A.2.6. ECONOMICAL REASONS
.A.2.7. NASTINESS
.A.3. ARE SOME OPERATING SYSTEMS MORE SECURE?
.B. SOME BASIC TARGETS FOR AN ATTACK
.B.1. SWAP SPACE
.B.2. BANDWIDTH
.B.3. KERNEL TABLES
.B.4. RAM
.B.5. DISKS
.B.6. CACHES
.B.7. INETD
.C. ATTACKING FROM THE OUTSIDE
.C.1. TAKING ADVANTAGE OF FINGER
.C.2. UDP AND SUNOS 4.1.3.
.C.3. FREEZING UP X-WINDOWS
.C.4. MALICIOUS USE OF UDP SERVICES
.C.5. ATTACKING WITH LYNX CLIENTS
.C.6. MALICIOUS USE OF telnet
.C.7. MALICIOUS USE OF telnet UNDER SOLARIS 2.4
.C.8. HOW TO DISABLE ACCOUNTS
.C.9. LINUX AND TCP TIME, DAYTIME
.C.10. HOW TO DISABLE SERVICES
.C.11. PARAGON OS BETA R1.4
.C.12. NOVELLS NETWARE FTP
.C.13. ICMP REDIRECT ATTACKS
.C.14. BROADCAST STORMS
.C.15. EMAIL BOMBING AND SPAMMING
.C.16. TIME AND KERBEROS
.C.17. THE DOT DOT BUG
.C.18. SUNOS KERNEL PANIC
.C.19. HOSTILE APPLETS
.C.20. VIRUS
.C.21. ANONYMOUS FTP ABUSE
.C.22. SYN FLOODING
.C.23. PING FLOODING
.C.24. CRASHING SYSTEMS WITH PING FROM WINDOWS 95 MACHINES
.C.25. MALICIOUS USE OF SUBNET MASK REPLY MESSAGE
.C.26. FLEXlm
.C.27. BOOTING WITH TRIVIAL FTP
.D. ATTACKING FROM THE INSIDE
.D.1. KERNEL PANIC UNDER SOLARIS 2.3
.D.2. CRASHING THE X-SERVER
.D.3. FILLING UP THE HARD DISK
.D.4. MALICIOUS USE OF eval
.D.5. MALICIOUS USE OF fork()
.D.6. CREATING FILES THAT ARE HARD TO REMOVE
.D.7. DIRECTORY NAME LOOKUPCACHE
.D.8. CSH ATTACK
.D.9. CREATING FILES IN /tmp
.D.10. USING RESOLV_HOST_CONF
.D.11. SUN 4.X AND BACKGROUND JOBS
.D.12. CRASHING DG/UX WITH ULIMIT
.D.13. NETTUNE AND HP-UX
.D.14. SOLARIS 2.X AND NFS
.D.15. SYSTEM STABILITY COMPROMISE VIA MOUNT_UNION
.D.16. trap_mon CAUSES KERNEL PANIC UNDER SUNOS 4.1.X
.E. DUMPING CORE
.E.1. SHORT COMMENT
.E.2. MALICIOUS USE OF NETSCAPE
.E.3. CORE DUMPED UNDER WUFTPD
.E.4. ld UNDER SOLARIS/X86
.F. HOW DO I PROTECT A SYSTEM AGAINST DENIAL OF SERVICE ATTACKS?
.F.1. BASIC SECURITY PROTECTION
.F.1.1. INTRODUCTION
.F.1.2. PORT SCANNING
.F.1.3. CHECK THE OUTSIDE ATTACKS DESCRIBED IN THIS PAPER
.F.1.4. CHECK THE INSIDE ATTACKS DESCRIBED IN THIS PAPER
.F.1.5. EXTRA SECURITY SYSTEMS
.F.1.6. MONITORING SECURITY
.F.1.7. KEEPING UP TO DATE
.F.1.8. READ SOMETHING BETTER
.F.2. MONITORING PERFORMANCE
.F.2.1. INTRODUCTION
.F.2.2. COMMANDS AND SERVICES
.F.2.3. PROGRAMS
.F.2.4. ACCOUNTING
.G. SUGGESTED READING
.G.1. INFORMATION FOR DEEPER KNOWLEDGE
.G.2. KEEPING UP TO DATE INFORMATION
.G.3. BASIC INFORMATION
.H. COPYRIGHT
.I. DISCLAIMER

.0. FOREWORD
------------

In this paper I have tried to answer the following questions:

- What is a denial of service attack?
- Why would someone crash a system?
- How can someone crash a system?
- How do I protect a system against denial of service attacks?

I also have a section called SUGGESTED READING where you can find good free information that can give you a deeper understanding of a topic. Note that I have very limited experience with Macintosh, OS/2 and Windows, and most of the material is therefore for Unix use.

You can always find the latest version at the following address:
http://www.student.tdb.uu.se/~t95hhu/secure/denial/DENIAL.TXT

Feel free to send comments, tips and so on to: t95hhu@student.tdb.uu.se

.A. INTRODUCTION
~~~~~~~~~~~~~~~~

.A.1. WHAT IS A DENIAL OF SERVICE ATTACK?
-----------------------------------------

Denial of service means knocking out services without permission, for example by crashing the whole system.
Such attacks are easy to launch, and it is hard to protect a system against them. The basic problem is that Unix assumes that users on the system, or on other systems, will be well behaved.

.A.2. WHY WOULD SOMEONE CRASH A SYSTEM?
---------------------------------------

.A.2.1. INTRODUCTION
--------------------

Why would someone crash a system? I can think of several reasons, presented more precisely in a section for each, but in short:

.1. Sub-cultural status.
.2. To gain access.
.3. Revenge.
.4. Political reasons.
.5. Economical reasons.
.6. Nastiness.

I think that numbers one and six are the more common today, but that numbers four and five will be the more common ones in the future.

.A.2.2. SUB-CULTURAL STATUS
---------------------------

After the information about syn flooding spread, a bunch of such attacks were launched around Sweden. Most of these attacks were not part of an IP-spoof attack; they were "only" denial of service attacks. Why? I think that hackers attack systems as a sub-cultural pseudo-career, and I think that many denial of service attacks, syn flooding in this example, were performed for those reasons. I also think that many hackers begin their career with denial of service attacks.

.A.2.3. TO GAIN ACCESS
----------------------

A denial of service attack can sometimes be part of an attack to gain access to a system. At the moment I can think of these reasons and specific holes:

.1. Some older X-lock versions could be crashed with a method from the denial of service family, leaving the system open. Physical access was needed to use the work space afterwards.
.2. Syn flooding can be part of an IP-spoof attack method.
.3. Some program systems have holes during startup that can be used to gain root, for example SSH (secure shell).
.4. During an attack it can be useful to crash other machines in the network, or to deny certain persons the ability to access the system.
.5.
A system being booted can also sometimes be subverted, especially with rarp-boots. If we know which port the machine listens to during the boot (69 could be a good guess), we can send false packets to it and almost totally control the boot.

.A.2.4. REVENGE
---------------

A denial of service attack could be part of revenge against a user or an administrator.

.A.2.5. POLITICAL REASONS
-------------------------

Sooner or later, organizations new and old will understand the potential of destroying computer systems and find tools to do it. For example, imagine bank A loaning company B money to build a factory that threatens the environment. Organization C therefore crashes A's computer system, maybe with help from an employee. The attack could cost A a great deal of money if the timing is right.

.A.2.6. ECONOMICAL REASONS
--------------------------

Imagine a small company A moving into a business totally dominated by company B. A's and B's customers place orders by computer and depend heavily on the orders being completed at a specific time (A and B could be stock trading companies). If A or B can't complete an order, the customers lose money and change companies. As part of a business strategy, A pays a computer expert a sum of money to crash B's computer systems a number of times. A year later, A is the dominating company.

.A.2.7. NASTINESS
-----------------

I know a person who found a workstation where the user had forgotten to log out. He sat down and wrote a program that performed a kill -9 -1 at a random time at least 30 minutes after the login time, and placed a call to the program in the profile file. That is nastiness.

.A.3. ARE SOME OPERATING SYSTEMS MORE SECURE?
---------------------------------------------

This is a hard question to answer, and I don't think it helps to compare different Unix platforms. You can't say that one Unix is more secure against denial of service; it is all up to the administrator.
A comparison between Windows 95 and NT on one side and Unix on the other could, however, be interesting. Unix systems are much more complex and have hundreds of built-in programs and services, which opens up many ways to crash the system from the inside. In a normal Windows NT or 95 network there are few ways to crash the system, though there are methods that will always work. So no big difference between Microsoft and Unix can be seen regarding inside attacks. But a couple of points remain:

- Unix has many more tools and programs for discovering an attack and monitoring users. Watching what another user is up to under Windows is very hard.
- The average Unix administrator probably also has much more experience than the average Microsoft administrator.

These last two points suggest that Unix is more secure against inside denial of service attacks. A comparison between Microsoft and Unix regarding outside attacks is much more difficult. However, I would say that the average Microsoft system on the Internet is more secure against outside attacks, because it normally runs far fewer services.

.B. SOME BASIC TARGETS FOR AN ATTACK
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.B.1. SWAP SPACE
----------------

Most systems have several hundred megabytes of swap space to service client requests. The swap space is typically used for forked child processes with short lifetimes, so in the normal case it will almost never be used heavily. A denial of service could be based on a method that tries to fill up the swap space.

.B.2. BANDWIDTH
---------------

If the bandwidth is used up, the network will be useless. Most denial of service attacks influence the bandwidth in some way.

.B.3. KERNEL TABLES
-------------------

It is trivial to overflow the kernel tables, which causes serious problems on the system. Systems with write-through caches and small write buffers are especially sensitive.
Kernel memory allocation is also a sensitive target. The kernel has a kernelmap limit; if the system reaches this limit, it cannot allocate more kernel memory and must be rebooted. Kernel memory is not only used for RAM, CPUs, screens and so on; it is also used for ordinary processes, meaning that any system can be crashed, and with a mean (or in some sense good) algorithm, pretty fast. Under Solaris 2.X, how much kernel memory the system is using is measured and reported with the sar command, but under SunOS 4.X there is no such command, meaning that under SunOS 4.X you cannot even get a warning. If you do use So

f:\12000 essays\technology & computers (295)\COMPUTER SECURITY.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The book Computer Security, written by Time Life Books, explains what computer security is, how it works, and how it affects people's lives. Without computer security, people's private information can be stolen right off their computers. Computer security is exactly what it sounds like: security on a computer to prevent unauthorized people from accessing it. It is very difficult to secure a computer, for a computer is like a mechanical human brain: anyone who knows how it works can make it work, while those who do not cannot do anything with it at all. Restricting who can do this is what computer security is meant to accomplish. It is done by making it possible to access a computer only with a password, or by locking the computer up.

Computer security works through many kinds of password use, or by physically locking the computer up. The basic password method is enforced by prompting a computer user to enter a password before they can access any programs or information already contained within the computer. Another password security method has the computer user carry a digital screen that fits in a pocket.
This digital screen receives an encrypted message and displays numbers that change every few minutes. These numbers form the password one needs for the next few minutes in order to access the computer. This password method is fairly new. It is also better, for the previous password method is not totally foolproof: the passwords are stored in the computer, and if a computer-literate person were to access this information, they could get into the computer looking as if they were someone else, having obtained someone else's password. In the future, as technology improves, a computer may be able to use your hand, inner eye, or voice print. Currently these methods are not widely used, but they are right around the corner. The only other way to absolutely secure a computer would be to lock it up, but with today's computer networks that cannot easily be done unless one is dealing with a single personal computer.

Computer security has become such a big issue because of the huge losses in profits by businesses whose computers were not totally secure from unwanted visitors. For example, a big business might spend millions of dollars on research and development, only to have it stolen by another party for its own profit. Businesses lose three hundred million to about five billion dollars yearly to these computer criminals, called hackers or computer information thieves. Computer security has come a long way since the creation of computers, but we still cannot fully believe our computers are secure and safe from unwanted snoopers. Computer security is very hard to achieve at present, for anyone can get in if they know what to do. Right now, what to do is easy; but if you needed someone's hand to get into a computer, that would be a little more difficult.
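The pocket token described above can be sketched as a shared-secret scheme: the token and the host both derive a short code from the same secret and the current time interval, so the displayed number changes every few minutes without any message needing to be sent. This is only an illustration of the idea; the HMAC construction, the five-minute step, and the secret below are assumptions, not the design of any particular product.

```python
import hmac, hashlib, struct, time

def token_code(secret, now=None, step=300):
    """Derive a 6-digit code from a shared secret and the current
    five-minute interval, in the manner of a hand-held password token."""
    counter = int((now if now is not None else time.time()) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return format(code % 1_000_000, "06d")

secret = b"shared-device-secret"   # burned into the token and the host

# Token and host agree while the same interval is current...
print(token_code(secret, now=1000) == token_code(secret, now=1150))  # True
# ...and once the interval rolls over, a recorded code is stale,
# which is why a stolen glance at the screen helps an intruder so little.
```

Because nothing secret is typed or transmitted, there is no stored password file to steal; an eavesdropper who captures one code holds a number that expires within minutes.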
f:\12000 essays\technology & computers (295)\Computer Secutity 4.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Viruses: Past, Present And Future
In our health-conscious society, viruses of any type are an enemy. Computer viruses are especially pernicious. They can and do strike any unprotected computer system, with results that range from merely annoying to the disastrous, time-consuming and expensive loss of software and data. And with corporations increasingly using computers for enterprise-wide, business-critical computing, the costs of virus-induced downtime are growing along with the threat from viruses themselves. Concern is justified - but unbridled paranoia is not. Just as proper diet, exercise and preventative health care can add years to your life, prudent and cost-effective anti-virus strategies can minimize your exposure to computer viruses.
· A history of computer viruses
· Who writes viruses - and how they can reach you
· The early warning symptoms of virus infection
· The real numbers behind the growth of viruses and their costs
· How viruses work - and how virus protection can stop them
What, Exactly, Is A Computer Virus?
A computer virus is a program designed to replicate and spread, generally with the victim oblivious to its existence. Computer viruses spread by attaching themselves to other programs (e.g., word processor or spreadsheet application files) or to the boot sector of a disk. When an infected file is activated - or executed - or when the computer is started from an infected disk, the virus itself is also executed. Often, it lurks in computer memory, waiting to infect the next program that is activated, or the next disk that is accessed. What makes viruses dangerous is their ability to perform an event. While some events are harmless (e.g.
displaying a message on a certain date) and others annoying (e.g., slowing performance or altering the screen display), some viruses can be catastrophic, damaging files, destroying data and crashing systems.
How Do Infections Spread?
Viruses come from a variety of sources. Because a virus is software code, it can be transmitted along with any legitimate software that enters your environment:
· In a 1991 study of major U.S. and Canadian computer users by the market research firm Dataquest for the National Computer Security Association, most users (87 percent) blamed an infected diskette. Forty-three percent of the diskettes responsible for introducing a virus into a corporate computing environment were brought from home.
· Nearly three-quarters (71 percent) of infections occurred in a networked environment, making rapid spread a serious risk. With networking, enterprise computing and inter-organizational communications on the increase, infection during telecommunication and networking is growing.
· Seven percent said they had acquired their virus while downloading software from an electronic bulletin board service.
· Other sources of infected diskettes included demo disks, diagnostic disks used by service technicians and shrink-wrapped software disks - together contributing six percent of reported infections.
What Damage Can Viruses Do To My System?
As mentioned earlier, some viruses are merely annoying; others are disastrous. At the very least, viruses expand file size and slow real-time interaction, hindering the performance of your machine. Many virus writers seek only to infect systems, not to damage them - so their viruses do not inflict intentional harm. However, because viruses are often flawed, even benign viruses can inadvertently interact with other software or hardware and slow or stop the system. Other viruses are more dangerous: they can continually modify or destroy data, intercept input/output devices, overwrite files and reformat hard disks.
What Are The Symptoms Of Virus Infection?
Viruses remain free to proliferate only as long as they exist undetected. Accordingly, the most common viruses give off no symptoms of their infection, and anti-virus tools are necessary to identify them. However, many viruses are flawed and do provide some tip-offs. Here are some indications to watch for:
· Changes in the length of programs
· Changes in the file date or time stamp
· Longer program load times
· Slower system operation
· Reduced memory or disk space
· Bad sectors on your floppy
· Unusual error messages
· Unusual screen activity
· Failed program execution
· Failed system boot-ups, or failures when accidentally booting from the A: drive
· Unexpected writes to a drive
The Virus Threat: Common - And Growing
How real is the threat from computer viruses? Every large corporation and organization has experienced a virus infection - most experience them monthly. According to data from IBM's High Integrity Computing Laboratory, corporations with 1,000 PCs or more now experience a virus attack every two to three months - and that frequency will likely double in a year. The market research firm Dataquest concludes that virus infection is growing exponentially. It found nearly two-thirds (63%) of survey respondents had experienced a virus incident (affecting 25 or fewer machines) at least once, with nine percent reporting a disaster affecting more than 25 PCs. The 1994 Computer Crime Survey by Creative Strategies Research International and BBS Systems of San Francisco found 76 percent of U.S. respondents had experienced infection in 1993 alone. If you have only recently become conscious of the computer virus epidemic, you are not alone. Virus infections became a noticeable problem to computer users only around 1990 - but the problem has grown rapidly since then. According to a study by Certus International of 2,500 large U.S.
sites with 400 or more PCs, the rate of infection grew by 600 percent from 1994 to 1995.
More Viruses Mean More Infections
Virus infections are a growing problem, in part, because there are more strains of viruses than ever before. In 1986, there were just four PC viruses. New viruses were a rarity, with a virus strain created once every three months. By 1989, a new virus appeared every week. By 1990, the rate rose to once every two days. Now, more than three viruses are created every day - for an average of 110 new viruses in a typical month. From those modest four viruses in 1986, today's computer users face thousands of virus strains.
Number Of Unique Viruses
Here is the frightening part: most infections today are caused by viruses that are at least six years old. That is, the infections are caused by viruses created no later than 1990, when there were approximately 300 known viruses. Today, there are thousands of viruses. If that pattern of incubation holds, the explosion of new viruses over the past few years could result in another explosion in total infections over the next few years.
The History Of Viruses: How It All Began
Today, the existence of viruses and the need to protect against them are inevitable realities. But it wasn't always so. As recently as the middle 1980s, computer viruses didn't exist. The first viruses were created in university labs - to demonstrate the "potential" threat that such software code could pose. By 1987, viruses began showing up at several universities around the world. Three of the most common of today's viruses - Stoned, Cascade and Friday the 13th - first appeared that year. Serious outbreaks of some of these viruses began to appear over the next two years. The Datacrime and Friday the 13th viruses became major media events, presaging the concern that would later surround the Michelangelo virus.
Perhaps surprisingly, tiny Bulgaria became known as the world's Virus Factory in 1990 because of the high number of viruses created there. The NCSA found that Bulgaria, home of the notorious Dark Avenger, originated 76 viruses that year, making it the world's single largest virus contributor. Analysts attribute Bulgaria's prolific virus output to an abundance of trained but unemployed programmers; with nothing to do, these people tried their hands at virus production, with unfortunately successful results. This growing activity convinced the computer industry that viruses were serious threats requiring defensive action. IBM created its High Integrity Computing Laboratory to lead Big Blue's anti-virus research effort. Symantec began offering Symantec Anti-Virus, one of the first commercially available virus defenses. These responses came none too soon. By 1991, the first polymorphic viruses - that can, like the AIDS virus in humans, change their shape to elude detection - began to spread and attack in significant numbers. That year too, the total number of viruses began to swell, topping 1,000 for the first time. Virus creation proliferated, and continues to accelerate, because of the growing population of intelligent, computer-literate young people who appreciate the challenge - but not the ethics - of writing and releasing new viruses. Cultural factors also play a role. The U.S. - with its large and growing population of computer-literate young people - is the second largest source of infection. Elsewhere, Germany and Taiwan are the other major contributors of new viruses. Another reason for the rapid rise of new viruses is that virus creation is getting easier. The same technology that makes it easier to create legitimate software - Windows-based development tools, for example - is, unfortunately, being applied to virus creation. The so-called Mutation Engine appeared in 1992, facilitating the development of polymorphic viruses. 
In 1992, the Virus Creation Laboratory, featuring on-line help and pull-down menus, brought virus creation within the reach of even non-sophisticated computer users.
More PCs And Networks Mean More Infections, Too
The growing number of PCs, PC-based networks and businesses relying on PCs is another set of reasons for rising infections: there are more potential victims. For example, in the decade since the invention and popularization of the PC, the installed base of active PCs grew to 54 million by 1990. But that number has already more than doubled (to 112 million PCs in 1993) and climbed to 154 million in 1994. Not only are PCs becoming more common - they are taking over a rising share of corporate computing duties. A range of networking technologies - including Novell NetWare, Microsoft Windows NT and LAN Manager, LAN Server, OS/2 and Banyan VINES - are allowing companies to downsize from mainframe-based computer systems to PC-based LANs and, now, client-server systems. These systems are more cost-effective, and they are being deployed more broadly within organizations for a growing range of mission-critical applications, from finance and sales data to inventory control, purchasing and manufacturing process control. The current, rapid adoption of client-server computing by business gives viruses fertile new ground for infection. These server-based solutions are precisely the type of computers that are susceptible - if unprotected - to most computer viruses. And because data exchange is the very reason for using client-server solutions, a virus on one PC in the enterprise is far more likely to communicate with - and infect - more PCs and servers than would have been true a few years ago. Moreover, client-server computing is putting PCs in the hands of many first-time or relatively inexperienced computer users, who are less likely to understand the virus problem.
The increased use of portable PCs, remote link-ups to servers, and inter-organization and inter-network e-mail all add to the risk of infections, too. Once a virus infects a single networked computer, the average time required to infect another workstation is from 10 to 20 minutes - meaning a virus can paralyze an entire enterprise in a few hours.
What Is Ahead?
The industry's latest buzz-phrase is "data superhighway" and, although most people haven't thought about those superhighways in the context of virus infections, they should. Any technology that increases communication among computers also increases the likelihood of infection. And the data superhighway promises to expand on today's Internet links with high-bandwidth transmission of dense digital video, voice and data traffic at increasingly cost-effective rates. Corporations, universities, government agencies, non-profit organizations and consumers will be exchanging far more data than ever before. That makes virus protection more important, as well. In addition to more opportunities for infection, there will be more, and more damaging, strains of virus to do the infecting. Regardless of the exact number of viruses that appear in the next few years, the Mutation Engine, Virus Creation Laboratory and other virus construction kits are sure to boost the virus population. Viruses that combine the worst features of several virus types - such as polymorphic boot sector viruses - are appearing and will become more common. Already, Windows-specific viruses have appeared. Virus writers, and their creations, are getting smarter. In response to the explosion in virus types and opportunities for transmission, virus protection will have to expand, too. Anti-virus software makers faced a speed bump from which many would eventually profit: 32-bit applications.
DOS and Windows 3.1 used a 16-bit architecture, while 32-bit platforms such as Windows NT, UNIX, and a variety of other server operating systems already had anti-virus programs. McAfee and Symantec, two giants in the anti-virus industry, prepared for the release of a new 32-bit home operating system. In August, Microsoft released Windows 95 for resale and it stormed across the nation. A large number of virus problems surfaced in the months after the release, due to the lack of a readily available 32-bit anti-virus product for the home user and the fact that old 16-bit anti-virus programs could not detect 32-bit viruses. McAfee introduced VirusScan 95 and Symantec released Norton AntiVirus 95 shortly after the Windows 95 release. As data architectures advance, anti-virus programs will have to be upgraded to handle the new program structures.
The Costs Of Virus Infection
Computer viruses have cost companies worldwide nearly two billion dollars since 1990, with those costs accelerating, according to an analysis of survey data from IBM's High Integrity Computing Laboratory and Dataquest. Global virus costs climbed another 1.9 billion dollars in 1994 alone, though growth has steadied as anti-virus programs have improved significantly. The costs are so high because of the direct labor expense of cleaning up all infected hard disks and floppies in a typical incident. The indirect expense of lost productivity - an enormous sum - is higher still. In a typical infection at a large corporate site, technical support personnel will have to inspect all 1,000 PCs. Since each PC user has an average of 35 diskettes, about 35,000 diskettes will have to be scanned, too.
Recovery Time For A Virus Disaster (25 PCs)
On average, it took North American respondents to the 1991 Dataquest study four days to recover from a virus episode - and some MIS managers needed fully 30 days to recover.
Even more ominously, their efforts were not wholly effective; a single infected floppy disk taken home during cleanup and later returned to the office can trigger a relapse. Some 25 percent of those experiencing a virus attack later suffered a re-infection by the same virus within 30 days. That cleanup cost each of these corporations an average of $177,000 in 1993 - a sum projected to grow to more than $254,000 in 1994. If you're in an enterprise with 1,000 or more PCs, you can use these figures to estimate your own virus-fighting costs: take the cost per PC ($177 in 1993, $254 in 1994) and multiply it by the number of PCs in your organization. At a briefing before the U.S. Congress in 1993, NYNEX, one of North America's largest telecommunications companies, described its experience with virus infections:
· Since late 1989, the company had nearly 50 reported virus incidents - and believes it experienced another 50 unreported incidents.
· The single-user, single-PC virus incident is the exception. More typical incidents involved 17 PCs and 50 disks at a time. In the case of a 3Com network, the visible signs of infection did not materialize until after 17 PCs were infected. The LAN was down for a week while the cleanup was conducted.
· Even the costs of dealing with a so-called benign virus are high. A relatively innocuous Jerusalem-B virus had infected 10 executable files on a single system. Because the computer was connected to a token ring network, all computers in that domain had to be scanned for the virus. Four LAN administrators spent two days plus overtime, one technician spent nine hours, a security specialist spent five hours, and most of the 200 PCs on the LAN had to endure 15-minute interruptions throughout a two-day period.
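The back-of-the-envelope estimate suggested above can be written out directly; the per-PC figures below are the survey numbers quoted in the text:

```python
# Surveyed average cleanup cost per PC (the Dataquest figures cited above).
COST_PER_PC = {1993: 177, 1994: 254}  # dollars

def estimated_cleanup_cost(num_pcs, year):
    """Estimate annual virus-fighting costs for an enterprise fleet."""
    return num_pcs * COST_PER_PC[year]

# A 1,000-PC enterprise reproduces the corporate averages cited above.
assert estimated_cleanup_cost(1000, 1993) == 177_000
assert estimated_cleanup_cost(1000, 1994) == 254_000
```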
In the October 1993 issue of Virus Bulletin, Micki Krause, Program Manager for Information Security at Rockwell International, outlined the cost of a recent virus outbreak at her corporation:
• In late April 1993, the Hi virus was discovered at a large U.S. division of Rockwell. The division is heavily networked, with nine file servers and 630 client PCs, and is connected to 64 other sites around the world (more than half of which are outside the U.S.). The virus had entered the division on program disks from a legitimate European business partner. One day after the disks arrived, the Hi virus was found by technicians on file servers, PCs and floppy disks. Despite eradication efforts, the virus continued to infect the network throughout the entire month of May.
• Internal PC and LAN support personnel spent 160 hours identifying and containing the infections. At $45.00 per hour, their efforts cost Rockwell $7,200.
• Rockwell also hired an external consultant to assist its employees in the cleanup. The consultant spent 200 hours, at a cost of $8,000.
• One file server was disconnected from the LAN to prevent the virus from propagating further across the network. The server, used by approximately 100 employees, was down for an entire day. Rockwell estimated the cost of the downtime at $9,000 (100 users @ $45/hr for 8 hours, with users accessing the server, on average, 25% of the normal workday).
• While some anti-virus software was already in use, Rockwell purchased additional software for both the servers and the client PCs for an additional $19,800.
• The total cost of the virus incident at Rockwell was $44,000.
Technical Overview: Computer Viruses And How They Work
Viruses are small software programs. At the very least, to be a virus, these programs must replicate themselves. They do this by exploiting computer code already on the host system.
The virus can infect, or become resident in, almost any software component, including an application, operating system, system boot code or device driver. Viruses gain control over their host in various ways. Here is a closer look at the major virus types, how they function, and how you can fight them.
File Viruses
Most of the thousands of viruses known to exist are file viruses, including the Friday the 13th virus. They infect files by attaching themselves to a file, generally an executable file - the .EXE and .COM files that control applications and programs. The virus can insert its own code in any part of the file, provided it changes the host's code somewhere along the way, misdirecting proper program execution so that the virus code runs first, rather than the legitimate program. When the file is executed, the virus is executed first. Most file viruses store themselves in memory. There, they can easily monitor access calls and infect other programs as they're executed. A simple file virus will overwrite and destroy a host file, immediately alerting the user to a problem because the software will not run. Because these viruses are immediately felt, they have less opportunity to spread. More pernicious file viruses cause more subtle or delayed damage - and spread considerably before being detected. As users move to increasingly networked and client-server environments, file viruses are becoming more common. The challenge for users is to detect and clean such a virus from memory without having to reboot from a clean diskette. That task is complicated because file viruses can quickly infect a range of software components throughout a user's system. Also, the scan technique used to detect viruses can cause further infections; scans open files, and file viruses can infect a file during that operation. File viruses such as the Hundred Years virus can infect data files, too.
Boot Sector/Partition Table Viruses
While there are only about 200 different boot sector viruses, they make up 75 percent of all virus infections. Boot sector viruses include Stoned, the most common virus of all time, and Michelangelo, perhaps the most notorious. These viruses are so prevalent because they are harder to detect - they do not change a file's size or slow performance, and are fairly invisible until their trigger event occurs, such as the reformatting of a hard disk. They also spread rapidly. The boot sector virus infects floppy disks and hard disks by inserting itself into the boot sector of the disk, which contains code that is executed during the system boot process. Booting from an infected floppy allows the virus to jump to the computer's hard disk. The virus executes first and gains control of the system boot even before MS-DOS is loaded. Because the virus executes before the operating system is loaded, it is not MS-DOS-specific and can infect any PC operating system platform - MS-DOS, Windows, OS/2, PC-NFS, or Windows NT. The virus goes into RAM and infects every disk that is accessed until the computer is rebooted and the virus is removed from memory. Because these viruses are memory-resident, they can be detected by running CHKDSK to view the amount of RAM and observing whether the expected total has declined by a few kilobytes. Partition table viruses attack the hard disk's partition table by moving it to a different sector and replacing the original partition table with their own infectious code. These viruses spread from the partition table to the boot sector of floppy disks as floppies are accessed.
Multi-Partite Viruses
These viruses combine the ugliest features of both file and boot sector/partition table viruses. They can infect any of these host software components.
And while traditional boot sector viruses spread only from infected floppy boot disks, multi-partite viruses can spread with the ease of a file virus - but still insert an infection into a boot sector or partition table. This makes them particularly difficult to eradicate. Tequila is an example of a multi-partite virus.
Trojan Horses
Like its classical namesake, the Trojan Horse virus typically masquerades as something desirable - e.g., a legitimate software program. The Trojan Horse generally does not replicate (although researchers have discovered replicating Trojan Horses). It waits until its trigger event and then displays a message or destroys files or disks. Because it generally does not replicate, some researchers do not classify the Trojan Horse as a virus - but that is of little comfort to the victims of these malicious strains of software.
File Overwriters
These viruses infect files by linking themselves to a program, keeping the original code intact and adding themselves to as many files as possible. Innocuous versions of file overwriters may not be intended to do anything more than replicate but, even then, they take up space and slow performance. And since file overwriters, like most other viruses, are often flawed, they can damage or destroy files inadvertently. The worst file overwriters remain hidden only until their trigger events. Then, they can deliberately destroy files and disks.
Polymorphic Viruses
More and more of today's viruses are polymorphic in nature. The recently released Mutation Engine - which makes it easy for virus creators to transform simple viruses into polymorphic ones - ensures that polymorphic viruses will only proliferate over the next few years. Like the human AIDS virus that mutates frequently to escape detection by the body's defenses, the polymorphic computer virus likewise mutates to escape detection by anti-virus software that compares it to an inventory of known viruses.
Code within the virus includes an encryption routine to help the virus hide from detection, plus a decryption routine to restore the virus to its original state when it executes. Polymorphic viruses can infect any type of host software; although polymorphic file viruses are most common, polymorphic boot sector viruses have already been discovered. Some polymorphic viruses have a relatively limited number of variants or disguises, making them easier to identify. The Whale virus, for example, has 32 forms. Anti-virus tools can detect these viruses by comparing them to an inventory of virus descriptions that allows for wildcard variations - much as PC users can search for half-remembered files in a directory by typing the first few letters plus an asterisk. Polymorphic viruses derived from tools such as the Mutation Engine are tougher to identify, because they can take any of four billion forms.
Stealth Viruses
Stealth aircraft have special engineering that enables them to elude detection by normal radar. Stealth viruses have special engineering that enables them to elude detection by traditional anti-virus tools. The stealth virus adds itself to a file or boot sector but, when you examine the host software, it appears normal and unchanged. The stealth virus performs this trickery by lurking in memory when it's executed. There, it monitors and intercepts your system's MS-DOS calls. When the system seeks to open an infected file, the stealth virus races ahead, disinfects the file and allows MS-DOS to open it - all appears normal. When MS-DOS closes the file, the virus reverses these actions, reinfecting the file. Boot sector stealth viruses insinuate themselves into the system's boot sector and relocate the legitimate boot sector code to another part of the disk. When the system is booted, they retrieve the legitimate code and pass it along to accomplish the boot.
When you examine the boot sector, it appears normal - but you are not seeing the boot sector in its normal location. Stealth viruses take up space, slow system performance, and can inadvertently or deliberately destroy data and files. Some anti-virus scanners, using traditional anti-virus techniques, can actually spread the virus. That is because they open and close files to scan them - and those acts give the virus additional chances to propagate. These same scanners will also fail to detect stealth viruses, because the act of opening the file for the scan causes the virus to temporarily disinfect the file, making it appear normal.
Anti-Virus Tools And Techniques
Anti-virus software tools can use any of a growing arsenal of weapons to detect and fight viruses, including active signature-based scanning, resident monitoring, checksum comparisons and generic expert systems. Each of these tools has its specific strengths and weaknesses. An anti-virus strategy that uses only one or two of the following techniques can leave you vulnerable to viruses designed to elude specific defenses. An anti-virus strategy that uses all of these techniques provides a comprehensive shield and the best possible defense against infection.
Signature-Based Scanners
Scanners - which, when activated, examine every file on a specified drive - can use any of a variety of anti-virus techniques. The most common is signature-based analysis. Signatures are the fingerprints of computer viruses - distinct strands of code that are unique to a single virus, much as DNA strands would be unique to a biological virus. Viruses, therefore, can be identified by their signatures. Virus researchers and anti-virus product developers catalog known viruses and their signatures, and signature-based scanners use these catalogs to search for viruses on a user's system. The best scanners have an exhaustive inventory of all viruses now known to exist.
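The signature idea described here can be sketched in a few lines: search raw bytes for catalogued code strings. The signatures below are invented for the example - real signatures are extracted from actual virus code by researchers:

```python
import re

# An invented catalog: virus name -> compiled byte-string signature.
# The '.' wildcard byte allows for the limited variations that some
# polymorphic viruses exhibit.
SIGNATURES = {
    "Example-A": re.compile(rb"\xde\xad.\xbe\xef", re.DOTALL),
    "Example-B": re.compile(rb"EXAMPLE-MARKER"),
}

def scan_bytes(data):
    """Return the names of any catalogued signatures found in `data`."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(data)]

assert scan_bytes(b"a clean program image") == []
assert scan_bytes(b"..\xde\xad\x01\xbe\xef..") == ["Example-A"]
```

A real scanner applies this same search to boot sectors, memory, partition tables and every file on a drive; a precise match also tells it exactly which virus to remove and how.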
The signature-based scanner examines all possible locations for infection - boot sectors, system memory, partition tables and files - looking for strings of code that match the virus signatures stored in its memory. When the scanner identifies a signature match, it can identify the virus by name and indicate where on the hard disk or floppy disk the infection is located. Because the signature-based scanner offers a precise identification of known viruses, it can offer the best method for effective and complete removal. The scanner can also detect the virus before it has had a chance to run, reducing the chance that the infection will spread before detection. Against these benefits, the signature-based scanner has limitations. At best, it can only detect viruses for which it is programmed with a signature. It cannot detect so-called unknown viruses - those that have not been previously discovered, analyzed and recorded in the files of anti-virus software. Polymorphic viruses elude detection by altering the code string that the scanner is searching for; to identify these viruses, you need another technique.
f:\12000 essays\technology & computers (295)\Computer Secutity and the law.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
COMPUTER SECURITY AND THE LAW
I. Introduction
You are a computer administrator for a large manufacturing company. In the middle of a production run, all the mainframes on a crucial network grind to a halt. Production is delayed, costing your company millions of dollars. Upon investigating, you find that a virus was released into the network through a specific account. When you confront the owner of the account, he claims he neither wrote nor released the virus, but he admits that he has distributed his password to "friends" who need ready access to his data files. Is he liable for the loss suffered by your company?
In whole or in part? And if in part, for how much? These and related questions are the subject of computer law. The answers may vary depending on the state in which the crime was committed and the judge who presides at the trial. Computer security law is a new field, and the legal establishment has yet to reach broad agreement on many key issues. Advances in computer security law have been impeded by the reluctance on the part of lawyers and judges to grapple with the technical side of computer security issues[1]. This problem could be mitigated by involving technical computer security professionals in the development of computer security law and public policy. This paper is meant to help bridge the gap between the technical and legal computer security communities.
II. THE TECHNOLOGICAL PERSPECTIVE
A. The Objectives of Computer Security
The principal objective of computer security is to protect and assure the confidentiality, integrity, and availability of automated information systems and the data they contain. Each of these terms has a precise meaning which is grounded in basic technical ideas about the flow of information in automated information systems.
B. Basic Concepts
There is a broad, top-level consensus regarding the meaning of most technical computer security concepts. This is partly because of government involvement in proposing, coordinating, and publishing the definitions of basic terms[2]. The meanings of the terms used in government directives and regulations are generally made to be consistent with past usage. This is not to say that there is no disagreement over the definitions in the technical community; rather, the range of such disagreement is much narrower than in the legal community. For example, there is presently no legal consensus on exactly what constitutes a computer[3]. The term used to establish the scope of computer security is "automated information system," often abbreviated "AIS."
An AIS is an assembly of electronic equipment, hardware, software, and firmware configured to collect, create, communicate, disseminate, process, store, and control data or information. This includes numerous items beyond the central processing unit and associated random access memory, such as input/output devices (keyboards, printers, etc.). Every AIS is used by subjects to act on objects. A subject is any active entity that causes information to flow among passive entities called objects. For example, a subject could be a person typing commands which transfer information from a keyboard (an object) to memory (another object), or a process running on the central processing unit that is sending information from a file (an object) to a printer (another object)[2]. Confidentiality is roughly equivalent to privacy. If a subject circumvents confidentiality measures designed to prevent its access to an object, the object is said to be "compromised." Confidentiality is the most advanced area of computer security because the U.S. Department of Defense has invested heavily for many years to find ways to maintain the confidentiality of classified data in AIS[4]. This investment has produced the Department of Defense Trusted Computer System Evaluation Criteria[5], alternatively called the Orange Book after the color of its cover. The Orange Book is perhaps the single most authoritative document about protecting the confidentiality of data in classified AIS. Integrity measures are meant to protect data from unauthorized modification. The integrity of an object can be assessed by comparing its current state to its original or intended state. An object which has been modified by a subject without proper authorization is said to be "corrupted." 
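The integrity assessment just described, comparing an object's current state to its original or intended state, is commonly implemented by fingerprinting objects with a cryptographic hash. A minimal Python sketch (the data and variable names are illustrative, not from any particular system):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest summarizing an object's current state."""
    return hashlib.sha256(data).hexdigest()

# Record the object's intended state while it is known to be good...
baseline = fingerprint(b"payroll record: 1200.00")

# ...then compare the current state against that baseline later on.
current = fingerprint(b"payroll record: 9200.00")  # modified without authorization

corrupted = (current != baseline)  # True: the object has been tampered with
```

Because any change to the object changes its digest, the comparison detects corruption without storing a full copy of the original object.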
Today, the integrity threat is no longer tractable exclusively through access control. The desire for wide connectivity through networks and the increased use of commercial off-the-shelf software has limited the degree to which most AISs can trust their subjects. The integrity threat has been accelerating over the past few years, and integrity will likely become as important a priority as confidentiality in the future. Availability means having an AIS and its associated objects accessible and functional when needed by its user community. Attacks against availability are called denial-of-service attacks. For example, a subject may release a virus which absorbs so much processor time that the AIS becomes overloaded. This is by far the least well developed of the three security properties, largely for technical reasons involving the formal verification of AIS designs[4]. Although such verification is not likely to become a practical reality for many years, techniques such as fault tolerance and software reliability are used to mitigate the effects of denial-of-service attacks. C. Computer Security Requirements The three security properties of confidentiality, integrity, and availability are achieved by labeling the subjects and objects in an AIS and regulating the flow of information between them according to a predetermined set of rules called a security policy. The security policy specifies which subject labels can access which object labels. For example, suppose you went shopping and had to present your driver's license to pick up some badges assigned to you at the entrance, each listing a brand name. The policy at some stores is that you can only buy the brand names listed on your badges. At the check-out lane, the cashier compares the brand name of each object you want to buy with the names on your badges. If there's a match, she rings it up. But if you choose a brand name that doesn't appear on one of your badges, she puts it back on the shelf. 
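The store's policy in this analogy amounts to a simple label check: a subject may access an object only if the object's label appears among the labels granted to the subject. A minimal sketch in Python (the badge and brand names are made up for illustration):

```python
def access_allowed(subject_labels: set, object_label: str) -> bool:
    """Security policy: a subject may access an object only if the
    object's label is among the labels granted to the subject."""
    return object_label in subject_labels

badges = {"BrandA", "BrandC"}          # labels granted to the shopper (subject)
cart = ["BrandA", "BrandB", "BrandC"]  # objects the shopper wants to buy

rung_up = [item for item in cart if access_allowed(badges, item)]
put_back = [item for item in cart if not access_allowed(badges, item)]
# rung_up == ["BrandA", "BrandC"]; put_back == ["BrandB"]
```

The cheats described next, altering a badge or impersonating your neighbor, are exactly the label tampering and subject spoofing that a secure system's supporting measures must prevent.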
You could be sneaky and alter a badge, or pretend to be your neighbor who has more badges than you, or find a clerk who will turn a blind eye. No doubt the store would employ a host of measures to prevent you from cheating. The same situation exists on secure computer systems. Security measures are employed to prevent illicit tampering with labels, positively identify subjects, and provide assurance that the security measures are doing the job correctly. A comprehensive list of minimal requirements to secure an AIS is presented in the Orange Book[5]. III. The Legal Perspective A. Sources Of Computer Law The three branches of the government, legislative, executive, and judicial, produce quantities of computer law which are inversely proportional to the amount of coordination needed for their enactment. The legislative branch, consisting of the Congress and fifty state legislatures, produces the smallest amount of law, which is worded in the most general terms. For example, the Congress may pass a bill mandating that sensitive information in government computers be protected. The executive branch, consisting of the president and numerous agencies, issues regulations which implement the bills passed by legislators. Finally, the judicial branch serves as an avenue of appeal and decides the meaning of the laws and regulations in specific cases. After the decisions are issued, and in some cases appealed, they are taken as the word of the law in legally similar situations. B. Current Views On Computer Crime Currently there is no universal agreement in the legal community on what constitutes a computer crime. One reason is the rapidly changing state of computer technology. For example, in 1979 a U.S. 
Department of Justice publication[6] partitioned computer crime into three categories: 1) Computer abuse, "the broad range of intentional acts involving a computer where one or more perpetrators made or could have made gain and one or more victims suffered or could have suffered a loss." 2) Computer crime, "illegal computer abuse that implies direct involvement of computers in committing a crime." 3) Computer-related crime, "any illegal act for which a knowledge of computer technology is essential for successful prosecution." These definitions have become blurred by the vast proliferation of computers and computer-related products over the last decade. For example, does altering an inventory bar code at a store constitute computer abuse? Should a person caught in such an act be prosecuted under both theft and computer abuse laws? Clearly, advances in computer technology should be mirrored by parallel changes in computer laws. Another attempt to describe the essential features of computer crimes has been made by Wolk and Luddy[1]. They claim that the majority of crimes committed against or with the use of a computer can be classified as follows: 1) Sabotage, which "involves an attack against the entire computer system, or against its sub components, and may be the product of foreign involvement or penetration by a competitor." 2) Theft of services, "using a computer at someone else's expense." 3) Property crime, involving the "theft of property by and through the use of a computer." A good definition of computer crime should capture all acts which are criminal and involve computers, and only those acts. Assessing the completeness of such a definition seems problematic, but may be tractable using technical computer security concepts. IV. Conclusion The development of effective computer security law and public policy cannot be accomplished without cooperation between the technical and legal communities. 
The inherently abstruse nature of computer technology and the importance of the social issues it generates demand the combined talents of both. At stake is not only a fair and just interpretation of the law as it pertains to computers, but more basic issues involving the protection of civil rights. Technological developments have challenged these rights in the past and have been met with laws and public policies which have regulated their use. For example, the use of the telegraph and telephone gave rise to privacy laws pertaining to wire communications. We need to meet advances in automated information technology with legislation that preserves civil liberties and establishes legal boundaries for protecting confidentiality, integrity, and assured service. Legal and computer professionals have a vital role in meeting this challenge together. REFERENCES [1] Stuart R. Wolk and William J. Luddy Jr., "Legal Aspects of Computer Use," Prentice Hall, 1986, p. 129. [2] National Computer Security Center, "Glossary of Computer Security Terms," October 21, 1988. [3] Thomas R. Mylott III, "Computer Law for the Computer Professional," Prentice Hall, 1984, p. 131. [4] Morrie Gasser, "Building a Secure Computer System," Van Nostrand, 1988. [5] Department of Defense, "Department of Defense Trusted Computer System Evaluation Criteria," December 1985. [6] United States Department of Justice, "Computer Crime: Criminal Justice Resource Manual," 1979. 
f:\12000 essays\technology & computers (295)\Computer Software Priacy and its Impact on the International .TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Computer Software Piracy and its Impact on the International Economy The PC industry is over twenty years old. 
In those twenty years, evolving software technology has brought us faster, more sophisticated, versatile, and easy-to-use products. Business software allows companies to save time, effort, and money. Educational computer programs teach basic skills and complicated subjects. Home software now includes a wide variety of programs that enhance the user's productivity and creativity. The industry is thriving and users stand to benefit along with the publishers. The SPA (Software Publishers Association) reports that the problem of software theft has grown, and threatens to prevent the development of new software products. Unauthorized duplication of software is known as software piracy, which is a "Federal offense that affects everyone" ("Software Use..." Internet). The following research examines software piracy in its various forms, its impact on the end user and the international industry as a whole, and the progress that has been made in alleviating the problem. Software piracy harms all software companies and, ultimately, the end user. "Piracy results in higher prices for honest users, reduced levels of support and delays in funding and development of new products, causing the overall breadth and quality of software to suffer" ("What is..." Internet). Even the users of unlawful copies suffer from their own illegal actions: they receive no documentation, no customer support and no information about product updates ("Software Use..." Internet). The White Paper says that while virtually every software publisher expresses concern about protecting their software from unauthorized duplication, over time, many have simply accepted the so-called "fact" that such duplication is unavoidable. This has created an atmosphere in which software piracy is commonly accepted as "just another cost of doing business" ("With the Growth..." Internet). In a brochure published by the SPA it is stated that a major problem arises from the fact that most people do not even know they are breaking the law. 
"Because the software industry is relatively new, and because copying software is so easy, many people are either unaware of the laws governing software use or choose to ignore them" ("To Copy or not to Copy" Internet). Robert Perry states that much of the problem of software theft arises from the way the software industry developed. In the past, when a software firm spent millions of dollars to write a program for a mainframe computer, it knew it would sell a handful of copies. It licensed each copy to protect its ownership rights and control the use of each copy. That is easy to do with only a few copies of a program. It is impossible for a software company to handle five million copies of their latest program (27). Software piracy is defined as any violation of software license agreements. In 1964, the United States Copyright Office began to register software as a form of literary expression. The Copyright Act, title 17 of the U.S. Code, was amended in 1980 to explicitly include computer programs. Today, according to the Copyright Act, it is illegal to make or distribute copyrighted material without authorization; the only exceptions are the user's right to make a copy as an "essential step" in using the program (for example, by copying the program into RAM or onto the hard drive) and to make a single backup copy for "archival purposes." No other copies may be made without specific authorization from the copyright owner (title 17, section 117). A SPA press release shows that in December 1990, the U.S. Congress approved the Software Rental Amendments Act, which generally prohibits the rental, leasing or lending of software without the express written permission of the copyright holder ("Retailers Agree..." Internet). "It doesn't matter whether the transaction is called 'rental,' 'buy-back,' 'try before you buy,' 'preview,' 'evaluation' or any similar term. 
If the software dealer does not have written permission from the copyright holders to rent software, it is illegal to do so," said Sandra Sellers, SPA vice president of intellectual property education and enforcement ("SPA sues..." Internet). NERDC information services found that the copyright holder may grant additional rights at the time the personal computer software is acquired. For example, many applications are sold in LAN (local area network) versions that allow a software package to be placed on a LAN for access by multiple users. Additionally, permission is sometimes given under a special license agreement to make multiple copies for use throughout a large organization. However, unless these rights are specifically granted, U.S. law prohibits a user from making duplicate copies of software except to ensure one working copy and one archival copy (NERDC Internet). Without authorization from the copyright owner, title 18 of the U.S. Code prohibits duplicating software for profit, making multiple copies for use by different users within an organization, downloading multiple copies from a network, or giving an unauthorized copy to another individual. All are illegal and a federal crime. Penalties include fines up to $250,000 and jail terms up to five years (Title 18, Sections 2320 and 2322). Microsoft states that illegal copying of personal computer software is a crucial dilemma both in the United States and overseas. Piracy is widely practiced and widely tolerated; in some countries, legal protection for software is nonexistent, while in others laws are unclear or not enforced with sufficient commitment. Significant piracy losses are suffered in virtually every region of the world. In some cases, as in Indonesia, the rate of unauthorized copying is believed to be in excess of ninety-nine percent ("What is..." Internet). Copyright laws vary widely from country to country, as do interpretations of the laws and the degree to which they are enforced. 
The concept of protecting the intellectual property incorporated in software is not universally recognized. Asia is one of the most technologically advanced regions of the world, and as its software market continues to grow and flourish, so does the black market of software piracy ("The Impact..." Internet). The worst offenders in this region are China and Russia, which the SPA named "one copy countries" two years in a row (1995 and 1996). Studies show that ninety-five to ninety-eight percent (virtually every copy) of U.S. business software in China is illegally pirated, which costs U.S. software companies an estimated five hundred million dollars a year ("SPA names..." Internet and "U.S., China..." D1-2). In Russia, the latest statistics from the SPA show that ninety-five percent of business software is illegally copied, which cost U.S. companies an estimated $117 million in 1994 ("SPA names..." Internet). Although Asia has extremely high piracy rates, SPA Executive Director Ken Wasch comments, "China, Russia, and Thailand (the three countries in Asia with the highest piracy rates) deserve credit for enacting copyright laws that specifically protect computer programs and other software..." Russia and China enacted copyright protection statutes several years ago, and Thailand enacted its law late in 1994 ("SPA names..." Internet). Asian countries have also taken action against offenders of copyright laws. The SPA reports that "on Wednesday, May 22, 1996, Hong Kong Customs officers arrested two suspected software pirate vendors and seized 20 CD-ROMs, each containing software with an estimated total retail value of US$20,000, along with the equipment capable of reproducing the pirate CDs" ("Hong Kong..." Internet). A Software Publishers Association press release offers another example of Asia's fight against software piracy: on March 25, 1996, Singapore police raided vans carrying 5,800 CD-ROMs containing US$700,000 worth of pirated software ("SPA, Singapore..." Internet). 
The Bloomberg forum reports that on August 7, 1995, Chinese anti-piracy forces raided stores in the southwestern city of Chengdu and arrested 37 people. Business Software Alliance vice president Stephanie Mitchell said that while that was the largest number of people so far arrested in a single raid on software retailers, China must hand out harsher punishments to discourage pirates after they're caught ("China takes..." Internet). As a result of China's lax enforcement, the SPA called upon the USTR (U.S. Trade Representative) "...to take action against China under Section 306 of the Trade Act of 1974 for failing to improve enforcement of intellectual property rights in computer software." Russia and Korea were also placed on the Special 301 Priority Watch List by the USTR so that the SPA could review their intellectual property laws and enforcement ("China and Russia..." Internet). "The United States and China signed a major accord in March of 1996 mandating tough enforcement against intellectual property piracy in China..." (Parker np). The BSA's European anti-piracy program covers more than 20 countries throughout the region and was initiated in 1989 "...with the filing of the software industry's first enforcement action for the illegal use of software in Italy." Piracy continues to be a significant problem in spite of the enactment of stronger copyright laws and successful prosecutions against software theft. "The average piracy rate of 25 European countries was estimated at 58 percent in 1994, with dollar losses exceeding $6 billion" ("The Impact..." Internet). Microsoft's studies show that many European countries, including some that offer computer software protection, have "unreasonably burdensome" administrative rules. Poland and the United Kingdom have had difficulty collecting evidence, and Greece is blamed for "fragmentation of court process." 
Most European countries lack sufficient penalties and adequate civil enforcement to discourage piracy, especially Germany, Poland, Sweden, and the UK. "Several countries, for example, Belarus and Romania, have general copyright laws that protect literary expression, but fail to clearly protect computer software" ("What is.." Internet). Ireland is Europe's worst offender, with yearly losses of more than forty-four million dollars because eighty-three percent of its software is pirated ("Software Piracy: Ireland..." Internet). The BSA "called for legislative reform and stricter observance of laws" after reviewing a study examining Europe's software piracy rates. The BSA argues that "experience has shown that improved legal protection for software copyright, and better policing by private companies and governments, can lead to a significant reduction in the number of illegal copies being made" ("Software Piracy: Ireland..." Internet). Latin America is the second fastest growing market for packaged software ("The Impact..." Internet). SPA president Ken Wasch said, "The encouraging first quarter sales data (1995) confirms Brazil's status as a major market for U.S. software publishers. With a rapidly growing and increasingly sophisticated economy, the potential for U.S. software companies in Brazil is enormous" ("Latin America..." Internet). Growing along with the increase in sales and production is the threat of software theft, "with the average piracy rate in 16 countries estimated at seventy-eight percent in 1994" ("The Impact..." Internet). The effect of international piracy organizations is a major problem that everyone is aware of. Another element which is beginning to make its presence known is the small-time software pirates who distribute software on BBSs (Bulletin Board Systems) or over the Internet. 
As with most topics dealing with the very new Internet underground and Internet crime, it is difficult to obtain information on these subjects. Because these underground activities are important to a full understanding of software piracy, and published information is scarce, much of the following is drawn from my own observations and investigations. Most small-time software piracy centers around bulletin board systems that specialize in "warez" (a common underground term for pirated software). On these systems, pirates can contribute and share copies of commercial software. Having access to these systems (usually obtained by contributing copyrighted programs via telephone modem or by money donations) allows the pirate to copy, or "download," copyrighted software. All the participants benefit because individuals must "upload" (copy files from their own system to the BBS) copyrighted programs in order to download, so new programs appear continuously. My observation reveals how pirates have become more efficient by creating mutual-participation "pirate groups" (as they are called in the computer underground). These groups are composed of ten to seventy members contributing in different ways; the members are usually anywhere from thirteen to thirty years of age. Some pirate groups are international, with members operating from different regions of the world. Their primary purpose is to obtain the latest software, remove any copy-protection from it, and then distribute it to the pirate community. The methods the pirates use to obtain the software are known only to the members of the pirate groups themselves. Some speculate that members either "hack" (break into a computer via modem from one's own system) into the computers of software companies and steal the software, or "pay off" employees of software companies. 
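The upload-for-download economy described above can be illustrated with a small sketch. This is a hypothetical model, not any real board's software: the class name RatioBBS and the 1:3 credit ratio are assumptions, chosen because ratio rules of roughly that shape were common on such boards.

```python
# Hypothetical sketch of a BBS upload/download ratio system:
# uploading earns download credit; downloading spends it.

class RatioBBS:
    def __init__(self, ratio=3):
        # ratio=3: each uploaded byte earns 3 bytes of download credit
        self.ratio = ratio
        self.credit = {}  # user -> download bytes remaining

    def upload(self, user, size_bytes):
        # contributing files is what buys access
        self.credit[user] = self.credit.get(user, 0) + size_bytes * self.ratio

    def download(self, user, size_bytes):
        if self.credit.get(user, 0) < size_bytes:
            return False  # not enough credit: must upload first
        self.credit[user] -= size_bytes
        return True

bbs = RatioBBS(ratio=3)
bbs.upload("alice", 1_000_000)       # 1 MB uploaded earns 3 MB of credit
ok = bbs.download("alice", 2_000_000)       # allowed: credit remains
blocked = bbs.download("alice", 2_000_000)  # refused: only 1 MB credit left
```

The self-reinforcing loop in the essay falls out of this rule: every download eventually forces another upload, so the pool of programs keeps growing.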
The software they receive is almost always less than one day old and is often referred to as "zero-day ware." "The Internet is an incredible international electronic information system providing millions with access to education, entertainment, and business resources, as well as promoting new forms of personal communication, including e-mail and on-line chatting" (Larson Internet). This also creates an ideal breeding ground for piracy. Software pirates utilize the services of the Internet to "trade" copyrighted "warez." In 1994 the Washington Post reported on an individual who had set up a computer bulletin board system connected to the Internet that allowed over one million dollars worth of software to be copied. People using the Internet were able to retrieve commercial software from this BBS for free. The sysop (system operator, the person running the BBS) was charged with fraud and copyright infringement but never convicted because of "murky" laws (Daly, D1). IRC (Internet Relay Chat) is an Internet service that enables people all over the world to communicate with each other by switching channels and typing messages on the screen. IRC also allows individuals to "post" files in selected channels, many of them copyrighted software available for trade. If someone sees a particular program they want, all they have to do is "tag" the file for download and it is copied onto their local hard drive. With the exception of IRC's real-time "chatting" capabilities, most of the functions of USENET are the same. USENET is a message network available on the Internet where users post public messages, on almost any topic imaginable, in hopes of getting an answer. As on IRC, users can attach files to the messages, some of which are copyrighted programs. 
Through my own analysis I have found that software pirates consider USENET and IRC extremely efficient ways to provide and trade copyrighted software, which is beginning to make BBS use obsolete. On-line services such as America Online, Prodigy, and CompuServe combine the ease of use of BBSs with the capabilities of the Internet. Most on-line services provide e-mail, virtual chat rooms, file areas, and even access to the Internet. Software pirate groups utilize these on-line services to trade copyrighted software, and with over 1.25 million other users on-line, they can go about unnoticed. David Pogue, a writer for MacWorld, says that members of these pirate groups sign on using fake credit card numbers and phony personal information. While on-line, the pirates trade copyrighted software or "warez" by e-mailing programs to each other and using chat rooms to receive new ones (Pogue 37). Most anti-piracy organizations have taken little, if any, action against this new wave of software piracy. The software industry loses millions if not billions of dollars to small-time software pirates. On the pirates' side are the safety of private bulletin boards, unclear laws, the vast size of on-line services, and the fact that IRC and USENET are completely lawless. There are no laws, no restrictions, and no one to stop the software pirates from committing their crimes. This permits pirates to go virtually undetected and free from punishment. In an article on computer crime in Newsweek, a spokeswoman for the on-line service Prodigy says of the Internet: "It's the Wild West. No one owns it. It has no rules" (Meyer 36-38). Microsoft says major software developers recognize that piracy is a problem and have begun taking steps to alleviate it. The software industry realizes that the problem of software piracy cannot be solved by one company alone; computer companies have "made a commitment to address the problem together." 
Software publishers are taking an active role in directly addressing software piracy by monitoring markets, conducting investigations, and pursuing litigation on their own as well as through the Business Software Alliance (BSA) and the Software Publishers Association (SPA) ("What is..." Internet). The White Paper lists "a number of potential solutions to software piracy that software publishers have used over time." Package warnings and license labeling make users aware of the consequences of illegal use of the software but are usually ignored. High-profile "piracy busts" and legal action against organized counterfeiters by anti-piracy organizations such as the SPA and BSA are "essentially sending a message to pirates that there are real risks associated with illegally copying software." Site licensing is a "popular" and "cost-effective" way of selling software to large organizations that need more than one copy. Forced registration and support contracts affect only novice computer users, because experts don't necessarily need technical support or manuals ("With the Growth..." Internet). Software piracy is a worldwide problem, one that is making an impact on the international economy and currently costing the software publishing industry more than fifteen billion dollars per year in lost revenues. With the growing interest in the distribution of software over the Internet and on-line services, the potential for these losses to increase is very real. Software publishers have used a number of alternative methods to protect their intellectual property, but have generally achieved only marginal success in reducing losses to piracy. Works Cited "China and Russia Again Named 'One Copy Countries' by the SPA in Special 301 Report." Software Publishers Association. Press Release. Washington D.C. 20 Feb 1996. URL: http://www.spa.org/gvmt/spa301.htm. "China Takes Software Piracy Clampdown Inland." Bloomberg Forum. 1995. News and Observer. 
URL: http://www.nando.net/new...fo/080785/info518_5.html. Daly, Christopher B. "Judge Dismisses Fraud Charges Against Student in Software Case." Washington Post. 30 Dec 1994: D1. NewsBank CD-ROM 1995. "Hong Kong Software Pirates Arrested Due to SPA Investigation." Software Publishers Association. Press Release. Washington D.C. 4 June 1996. URL: http://www.spa.org/piracy/releases/hongk.htm. Larson, Megan J. "Copyright in Cyberspace." ts. U of Oregon, 1995. URL: http://gladstone.uoregon.edu/%7Emega/Copy.html. "Latin America Software Sales Reach $48.2 Million in First Quarter 1995." Software Publishers Association. Press Release. Washington D.C. 13 Feb 1995. URL: http://www.spa.org/research/95q1lati.htm. Meyer, Michael. "Stop! Cyberthief!" Newsweek. 6 Feb 1995: 36-38. SIRS Researcher CD-ROM, 1995. Art 103. Parker, Jerry. "China Tackles Software Piracy at State Agencies." Reuters. 14 April 1995: np. NewsBank CD-ROM 1995. Perry, Robert L. Computer Crime. New York: Franklin Watts, 1986. "Retailers Agree Not to Rent Computer Software Without Permission From Publishers." Software Publishers Association. Press Release. Washington D.C. 7 Feb 1996. URL: http://www.spa.org/piracy/releases/swrental.htm. "Software Piracy - It's Not Worth the Risk." NERDC Information Service. URL: http://nervm.nerdc.ufl.edu/update/U9506O7A.html. "Software Piracy: Ireland is Europe's Worst Offender." IBEC News. URL: http://www.iol.ie/ibc/news/IBEC/january/4.htm. Software Publishers Association. Software Use and the Law. Washington D.C.: SPA 1995. URL: http://www.spa.org/piracy/sftuse.htm. Software Publishers Association. To Copy or Not to Copy. Washington D.C.: SPA 1996. URL: http://www.spa.org/piracy/okay.htm. "SPA Names Russia, China 'One Copy Countries.'" Software Publishers Association. Press Release. Washington D.C. 13 Feb 1995. URL: http://www.spa.org/gvmt/onecopy.html. "SPA, Singapore Police, and AACT Raid Vans Carrying Pirated Software." Software Publishers Association. Press Release. 
Washington D.C. 4 June 1996. URL: http://www.spa.org/piracy/releases/singapor.htm. "SPA Sues Six U.S. Software Rental Companies." Software Publishers Association. Press Release. Washington D.C. 28 Feb 1996. URL: http://www.spa.org/piracy/releases/rentsuit.htm. "The Impact of Software Piracy on the International Market Place." URL: http://198.105.234.4/piracy/rgnifact.htm. United States. U.S. Code: Copyright Acts. Title 17, Sec 117. United States. U.S. Code: Copyright Acts. Title 18, Sec 2320 and 2322. "U.S., China Avert Trade War." Sun-Sentinel 18 June 1996: 1D-2. "With the Growth of Worldwide Software Piracy and the Emergence of On-Line Software Distribution, Protecting Intellectual Property is Now More Critical than Ever." The White Paper. URL: http://www.hasd.com/hasd/misc/white.htm. "What is Software Piracy?" Microsoft Anti-Piracy Home Page. 1995. URL: http://198.105.232.4/piracy/intlrep.htm. f:\12000 essays\technology & computers (295)\Computer System in the Context of Retail Business.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Computing Studies Assignment Computer System in the Context of Retail Business * Today, retail businesses must have up-to-date technology in order to be successful. Accurate, efficient sending and receiving of communications can affect the business, so it is very important to have the latest technology, such as computers and networks. Retailing on a local and global scale can also affect how successful the business is. Locally, the efficient networking that retail businesses have allows customers to purchase goods faster; for example, the new bar-code scanners in supermarkets reduce the time customers spend waiting to purchase goods. Globally, in trading, a computer retail store that wants to purchase stock from overseas can make contracts using the Internet. * Computer systems in retail trading on a local and global scale play an important role in today's society. 
Computer systems such as the supermarket POS system provide efficient and accurate calculations when customers purchase goods. Absolut Software provides a host of state-of-the-art capabilities vital for increasing sales and productivity; it can reduce the number of operators and supporting hardware by 15 percent, and it provides a training mode for novices and a high-speed mode for the experienced. Features: * Complete mailing list management * Promotion tracking * Catalog and telemarketing * Importing sub-system * On-line order entry * Inventory control (multi-site, serialized, lot number, decimal quantities, and style-color-size) * Credit card billing * Computer-driven in-store POS with "Suspend and Hold" and availability display * Wildcard search * User-definable extended search * Unlimited text and binary (graphics) storage for key fields * A complete financial (A/R, A/P, G/L) sub-system * E.D.I. There is also the international network system, which brings the business the information, e-mail addresses, and contract forms of interested customers. * A retail computer system can perform tasks such as: Stock control, which tracks how much stock the business holds and the price of each item. For example, the SMART System is a totally integrated and interactive retail business system that includes the following modules: Order Entry, Inventory Management, Sales Analysis, Accounts Payable, Accounts Receivable, Monthly Lease, Financial Accounting, Payroll, and Customer Mailing. Modules may be purchased separately if desired. An optional module for contract and insurance calculation and form printing, known as EZYCALC, is also available and can be integrated into the system. The system was originally programmed for the retail furniture industry but works equally well for any big-ticket retail operation. 
All affected files are updated as each transaction is keyed, providing real-time information and reports, and the system will handle multiple and remote store operations. Personnel management keeps records of employees and staff, including their salary, holidays, and absences; the system is designed to check that wages and staffing are being handled correctly. Checkout: for example, the Easy Sale Scan is a bar-code scanning point-of-sale application which automatically organizes business functions from any central computer site. Bar-code scanning minimizes keyboard data entry errors and user frustration while providing unlimited information selection, transmission, updating, and reporting. Features: * Automatically calculates quantity discounts, special customer discounts, sales tax amounts, credit limit verification, and change due. * The Work-in-Progress feature automatically generates a current production schedule of orders to be processed by customer or product. * The Inventory feature flags price fluctuations or dangerously low stock quantities on site. * A built-in expert system displays the latest trends for re-order decision support. * A universal API provides open connectivity to any database or hardware platform. Easy Sale Scan is currently portable to over 140 computer platforms. Customised labels can help customers learn which section of the store sells which item and show customers new items that are out. * Hardware used by a retail store: The Point of Sale System is designed for retail and/or wholesale businesses that need to generate at-counter customer invoices and monitor product inventory levels. This easy-to-use system improves employee cash handling efficiency and accuracy by displaying all cash transaction data. It supports split tender payments, unit of measure pricing (including metric pricing), and "look up" features that allow the user to view product, price, and customer information directly on the terminal. 
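The checkout arithmetic listed among the Easy Sale Scan features (quantity discounts, sales tax, change due) amounts to a short calculation. The sketch below is a hypothetical illustration of that kind of logic, not the product's actual code; the 6% tax rate, the 10-item discount threshold, and the 10% discount are all assumed values.

```python
# Hypothetical sketch of point-of-sale checkout arithmetic:
# quantity discount, sales tax, and change due.

def checkout(unit_price, qty, tendered,
             tax_rate=0.06, discount_qty=10, discount=0.10):
    subtotal = unit_price * qty
    if qty >= discount_qty:              # quantity discount kicks in
        subtotal *= (1 - discount)
    total = round(subtotal * (1 + tax_rate), 2)  # add sales tax
    change = round(tendered - total, 2)          # change due to customer
    return total, change

total, change = checkout(unit_price=2.50, qty=12, tendered=40.00)
# 12 x $2.50 = $30.00, minus 10% = $27.00, plus 6% tax = $28.62 total
```

A real register would use fixed-point (cents) rather than floats to avoid rounding drift, but the structure of the calculation is the same.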
The reports generated include margin, price, inventory valuation, minimum on hand, and daily summary. For multistore chains and franchises, a remote store processing add-on is available. The system interfaces with Accounts Receivable, General Ledger, and Purchase Order Management. Single and multi-user versions are available. There is also the electronic funds transfer and payment processing system, which performs credit card sales, refunds, and voids. It also performs debit card sales and voids and has full transaction "stand-in" store-and-forward capabilities to maintain sales activity if transmission lines go down. The central computer contains all necessary information for the business and operates as a server; it holds data such as the databases on items and employees. * Retail businesses have been affected by the introduction of computer systems through the demands of customers, such as efficient and accurate calculation when purchasing goods, convenient cash transfer (cash registers), and efficient stock keeping and record keeping. * Developments in technology brought about by the needs of the retail trade: Customised software - as large and small retail stores grew and the demand for efficient systems increased, more developers appeared and produced more products. Bar-code readers - designed to be more efficient at calculating and reading prices off the item. Eftpos - the electronic funds transfer at point of sale system was designed for the convenience of customers who would rather not carry a large amount of money around when purchasing goods. International bar-code convention - bar codes are placed onto every product package; when a bar-code reader scans the product, it automatically displays the product's information and calculates the price. * Retailers have dealt with the issues of: Privacy - not allowing people to access the store's database, and setting passwords to protect it. 
The nature of work - many jobs once done by people have been taken over by faster and more powerful computers, which enable the work to be done more efficiently and more accurately. Copyright - laws to stop companies breaking into other companies' computer systems and stealing their development plans, secrets, or trading prices. Ethics - issues that determine whether an action is right or wrong. Is some retail information open to the public, so that anyone can access it? A business has to consider that before it acts. Computer crime - computer crime has been increasing as companies join the Internet and local area networks; companies may set passwords to protect their data. * Ways that certain issues have been affected by the use of technology in retail establishments: Power - power consumption is much higher than before because of the introduction of computer systems. Control - retailers have less control because of the freedom of users, but a retail business still controls which information can be made public and which cannot; this relates to freedom-of-information policies. Equity - some jobs can be done by people who are mentally ill, and companies should provide opportunities to them. The environment - technology has had a great effect on the environment, through power-saving computers, lower-radiation computers in retail businesses, and air conditioning to control the temperature of the workplace. * Similarities between the computer-based systems in the context of libraries and the context of retail trade are that technology has been a great influence on both. Technology has become more and more advanced as time passes to meet the needs of people. Both the library and retail contexts consider many factors such as privacy, nature of work, copyright, ethics, and computer crime, as well as power, control, equity, and the environment. 
* Differences between the library context and the retail context are that libraries are concerned with the flow of information to people, helping them access the information they want, whereas retailers are concerned only with their own information. Anthony Wu 11CS2 f:\12000 essays\technology & computers (295)\Computer Systems Analyst.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Computer Systems Analyst I push the button, I hear a noise, the screen comes alive. My computer loads up and starts to process. I see the start screen for Windows 95, and I type in my password. Even though this takes time, I know that I will be able to do whatever I want to do without any trouble, without any glitches, without any questions. My computer is now easier to use and more user-friendly because computer systems analysts have worked out the problems that many computer systems still have. It appears to me that a career choice needs to contain a number of different features. The first being: will this area of interest mentally stimulate me as well as challenge me? The second being: is there a way of making a living in this area of interest? And finally: do I enjoy the different activities within this area of interest? From the first day that I started my first computer, I have grasped the concepts quickly and with ease. But the computer, as well as I, will never stop growing. I have introduced myself to everything from word processing to surfing the web. After reviewing a number of resources, I have noticed a relatively high demand for technologically integrated hardware and software positions with companies that wish to compete with the demand for "networking" ("Computer Scientists" 95). This leads me to believe that future employment prospects will be strong, with high-quality pay, within the next eight to ten years. The past, present, and future have seen and will see the computer. 
Since I have seen the computer, I have enjoyed the challenges and countless opportunities to gain in life from this machine. From school projects to games, from the Internet to programming languages, I have and always will feel like that little kid in the candy store. Job Description A computer systems analyst decides how data are collected, prepared for computers, processed, stored, and made available for users ("Computer Systems" COIN 1). The main task of a systems analyst is to improve the efficiency of an existing computer system, or create a whole new one that proves more efficient, for a contracting company. When on an assignment, the analyst must meet a deadline. While striving for a deadline, he must create and comprehend many sources of information for the presentation. He must review the system's capabilities, workflow, and scheduling limitations ("Systems Analyst" 44) to determine if certain parts of the system must be modified for a new program. First, a computer programmer writes a program that he thinks will be beneficial for a certain system, incorporating everything he thinks is necessary. The hard part comes when the programmer runs the program: 99% of the time the program will not work at first, thus not creating a profit for the company. Then the analyst looks at the program. It is now his job to get rid of all the glitches that are present. He must go over every strand of the program until it is perfect. When the analyst is finished "chopping up" the program, he must then follow a technical procedure of data collecting, much like that of a science lab. The Dictionary of Occupational Titles says he must plan and prepare technical reports, memoranda, and instructional manuals as documentation of program development (44). When presentation day is near, the analyst submits the proof. He must organize and relate the data in a workflow chart and many diagrams. More often than not, an idea is too good to be true unless the proof is there. 
For this new program that will go into the system, detailed operations must be laid out for the presentation. Yet when the system hits the market, the program must be as simple as possible. A computer systems analyst must always look for the most minute points whenever a program is being reviewed. Education and Training Many people think that this is the type of job where you must really like the concept. This is true. Many people think that you need significant prior experience to ever make it somewhere. This is true. Many people think that you need a bachelor's degree to at least start out somewhere. This is not true. Research shows that you don't necessarily have to go to college to make it. In this particular field a college education helps impress the employer, but for a basic analyst job, the only credential really needed comes from the Quality Assurance Institute, which awards the designation Certified Quality Analyst (CQA) to those who meet education and experience requirements, pass an exam, and endorse a code of ethics ("Computer Scientists" 95). Linda Williams found a technical analyst at the Toledo Hospital who went to the Total Technical Institute near Cleveland and earned his CQA (11-13). However, college is the best bet, and a bachelor's degree is the best credential to have alongside the CQA. Employers almost always seek college graduates for analyst positions, and many candidates have some prior experience. Many rookies are found in the small temporary agencies that need short-term help. The ones who have really made it have been in the business for at least 15 years. Once in a secure professional position, an analyst will need upgrading just as quickly as the systems themselves do. Continuous study is necessary to keep skills up to date. Continuing education is usually offered by employers in the form of paid time in night classes. 
Hardware and software vendors might also sponsor seminars where analysts go to gather ideas and see new products. Even colleges and universities sponsor some of these events ("Computer Systems" America's 36). Environment, Hours, and Earnings Systems analysts work in offices in comfortable surroundings. They usually work about 40 hours a week, the same as other professionals and office workers. Occasionally, however, evening or weekend work may be necessary to meet deadlines, according to America's 50 Fastest Growing Jobs (36). Most of the time, an analyst will live a quiet lifestyle, unlike that of a lawyer or doctor, yet he has freedoms that those occupations don't offer. The pay may be lower, but the family time increases. Although this may sound basic, it is coming to the point where the common analyst can work from almost any everyday setting. In bed, at home, in the car, and at the diner might all be places where an analyst performs his work, thanks to the technology available today. Even technical support can be done from a remote location, thanks in large part to modems, laptops, electronic mail, and the Internet ("Computer Scientists" 94). So as the hours per week start to vary because of where the work can be done, so do the earnings. The industry is growing, and according to the Occupational Outlook Quarterly chart, it will be the fastest growing from now until 2005. This occupation will grow so rapidly, in fact, that by 2005 the number of systems analysts will have increased by 92%. Since this is practically the only job that will nearly double by 2005, it stands to reason that earnings will go up too. According to the same chart, average weekly earnings are $845, third only to the two obvious occupations of lawyers and physicians (48). In 1994, the median earnings for a full-time computer systems analyst were about $44,000, with the middle 50% earning between $34,100 and $55,000. 
The highest tenth of all analysts earned over $69,400, and those with degrees generally earn more. ("Computer Scientists" 95) It is also stated in America's 50 Fastest Growing Jobs that systems analysts working in the Northeast had the highest earnings and those working in the Midwest had the lowest. (37)

Works Cited

"America's Fastest Growing Job Opportunities." Hispanic Times, 1996.
"Computer Scientists and Systems Analysts." Occupational Outlook Handbook. Indianapolis: JIST Works Inc., pp. 93-95.
"Computer Systems Analyst." COIN Educational Products. CD-ROM, 1995-96: 1-6.
Emch, Brian. Job Shadowing. Dana Corporation, 1996.
Farr, J. Michael. America's 50 Fastest Growing Jobs. Indianapolis: JIST Works Inc., 1994.
Occupational Outlook Quarterly. Bureau of Labor Statistics, 1996.
"Systems Analyst." Dictionary of Occupational Titles. US Department of Labor, 1992: p. 44.
Williams, Linda. Careers Without College: Computers. Princeton: Peterson's Guides, 1992.

f:\12000 essays\technology & computers (295)\Computer Technician.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Computer Technician

I believe a Computer Technician is a good career for me because I have been around computers for many years now and enjoy them. I began to learn the basics of computers from my father when I was about 9 years old. Since then I have pretty much taught myself and taken off in the computer field. I now have 7 networked computers ("linked together"), help run an Internet provider, and build web pages. About a year ago my uncle changed jobs, and now he is a Computer Technician. I have been working with him and really enjoy it.

Five Tasks a Computer Technician May Perform

Generally there are five tasks a Computer Technician has to perform: conducting research, analyzing systems, monitoring software and hardware, fixing hardware and software, and designing computers.

Working Conditions

The working conditions of a Computer Technician vary.
It depends on where and for whom you are working. The average working environment is indoors, quiet, temperature controlled, and usually solitary.

Working Schedule

The working hours vary as well. Computer Technicians are on call 24 hours a day, 7 days a week, because most companies' computers run around the clock and the companies cannot wait long for a computer to be fixed.

Salary

The average salary for a Computer Technician is approximately $65,500 per year. To become a Computer Technician you need one or two years of technical training, which most technical and vocational schools offer, and you must have good math skills. There are no licenses to obtain or exams to pass to become a Computer Technician. Certain personal qualities are needed, such as good eyesight, good hearing, and the ability to work without supervision. Certain skills are needed as well, such as knowing how different computers function and work with one another. Computer Technician employment opportunities exist now, as listed in the want ads, and are going to continue to grow in the future. To become a Computer Technician you might want to pursue business courses, advanced math, and computer courses during high school. To prepare myself, I am going to have to take advantage of math classes to improve my math skills and take the computer classes being offered in high school. In my opinion a Computer Technician is the best choice for me because I have been around computers for so long, enjoy them, and like solving other people's and companies' computer problems.

f:\12000 essays\technology & computers (295)\Computer Technology.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Computer Programming

Programming a computer is almost as easy as using one and does not require you to be a math genius.
Some say that people who are good at solving story problems make good programmers; others say that artistic or musical talent is a sign of a potential programmer. Various computer languages are described below, along with tips on choosing the right language and learning how to use it. Learning how to program is actually easier than many people think: it takes about the same time as two semesters of a college course, and the process is uniquely reinforcing because students receive immediate feedback on their screens. The programming languages Basic, Pascal, C, and Database are discussed; tips on learning the languages are offered; and a list of publishers' addresses is provided. One approach, rapid application development (RAD), has tremendous power, but it is not without its limits. The two basic advantages RAD tools promise over traditional programming are a shorter, more flexible development cycle and the fact that applications can be developed by a reasonably sophisticated end user. The main disadvantage is that RAD tools often still require code to be written, which means that except for the simplest applications, most developers will probably have to learn to program in the underlying programming language anyway. The time gained from using a RAD tool can be immense, however: programmers using IBM's VisualAge report the ability to create up to 80 percent of an application visually, with the last 20 percent consisting of specialized functions. In other words, with IBM's tool most of the program is built graphically, which is just point-and-click work, and the remaining hand-written code really isn't much. Anyone who is willing to invest a little time and effort can now write computer programs and customize commercial applications, thanks to new software tools.
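The claim that programming is within most people's reach is easy to demonstrate: even a useful first program is only a few lines long. Here is a sketch of the kind of "story problem" program a beginner might write in a first semester. It is shown in Python purely as a modern illustration; a Basic or Pascal version of the sort the essay has in mind would look much the same.

```python
# A beginner's "story problem" program: how much is owed on a loan?
# The whole calculation is one formula: interest = principal * rate * years.

def simple_interest(principal, rate, years):
    """Return the simple interest on `principal` at annual `rate` for `years`."""
    return principal * rate * years

# Immediate feedback, as the essay describes: run it and see the answer.
owed = 1000 + simple_interest(1000, 0.07, 3)  # $1000 at 7% for 3 years
print(round(owed, 2))
```

The immediate feedback loop is the point: change a number, run again, and the screen answers at once.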
People can create their own applications with such programming languages as Microsoft's Visual Basic for Windows (which costs about $130) or Novell's AppWare, part of its PerfectOffice suite. These products enable users to do much of their programming through point-and-click choices without memorizing many complicated commands. Programming can also be very difficult. At least one programming mistake is almost always made, and debugging can be very hard: just finding where the problem is can take a long time, and fixing one problem can introduce another. A programming error in a cancer-therapy machine has led to loss of life, and the potential for disaster will increase as huge new software programs designed to control aircraft and the national air-traffic control system enter into use. There is currently no licensing or regulation of computer programmers, a situation that could change as internal and external pressures for safety mount. Programming is also hard if you don't have the right hardware and software. Limited memory, a lack of programming standards, and hardware incompatibilities once made computing confusingly complicated. Computing does not have to be complicated anymore, however. Although computer environments still differ in some respects, they look and feel similar enough to ease the difficulty of moving from one machine to another and from one application to another, and improved software is helping to resolve problems of hardware incompatibility. As users spend less time learning about computers, they can spend more time learning with them. I would like to learn some of these programming languages. I am especially interested in learning Borland C++ or Visual C++. Visual Basic is all right, but I think learning a C language would be much more interesting and probably more profitable in the future.

Bibliography
1. Business Week, April 3, 1995
2. Byte Magazine, August 1995
3. Compute Magazine, June 1995
4.
Compute Magazine, May 1996
5. Newsweek Magazine, January 29, 1995

f:\12000 essays\technology & computers (295)\Computer Viruses 3.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Information About Viruses

A growing problem in using computers is computer viruses. Viruses are pesky little programs that some hacker with too much time on his hands wrote. Viruses have been known to create annoyances ranging from displaying messages on the screen, such as "Your PC is now stoned," to completely obliterating everything contained on the computer's hard disk. Viruses are transferred from computer to computer in a number of ways. The most common way that home users receive viruses is by downloading programs off the Internet or off of local Bulletin Board Systems, or BBSs. These viruses are then transferred to a floppy disk when the infected computer writes to it. Computers can also be infected when a disk that carries a virus is used to boot the computer. Once a computer is infected, everyone who writes to a floppy disk on that computer gets the floppy disk contaminated, and risks carrying the virus to another computer, which may not have good virus protection. On IBM-compatible PCs, viruses will only infect executable programs, such as .EXE and .COM files. On a Macintosh, any file can be contaminated. A disk can also be infected even without any files on it. These viruses are called BOOT SECTOR viruses. They reside on the part of the floppy disk or hard disk that stores the information needed for the disk to be used, and that part is loaded into memory each time the computer is booted from one of these disks. DON'T DESPAIR! Despite all of what has just been said, viruses are controllable. There is software called Virus Protection Software. A couple of programs that have been proven to work are F-PROT and McAfee's Virus Scan.
These programs scan the computer's memory and the files contained on the hard disk each time the program is executed, and they can also reside in the computer's memory. These programs will help reduce the spread of viruses.

f:\12000 essays\technology & computers (295)\Computer Viruses 4.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

----------- Viruses -----------

A virus is a program that copies itself without the knowledge of the computer user. Typically, a virus spreads from one computer to another by adding itself to an existing piece of executable code so that it is executed when its host code is run. If a virus is found, you shouldn't panic or be in a hurry, and you should work systematically. Don't rush! A virus may be classified by its method of concealment (hiding). Some are called stealth viruses because of the way they hide themselves, and some are called polymorphic because of the way they change themselves to avoid detection by scanners. The most common classification relates to the sort of executable code the virus attaches itself to. These are:
- Partition Viruses
- Boot Viruses
- File Viruses
- Overwriting Viruses
As well as replicating, a virus may carry a damage routine. There is also a set of programs that are related to viruses by virtue of their intentions, appearances, or users' likely reactions. For example:
- Droppers
- Failed viruses
- Packagers
- Trojans
- Jokes
- Test files

THE DAMAGE ROUTINE

Damage is defined as something that you would prefer not to have happened. It is measured by the amount of time it takes to reverse the damage. Trivial damage happens when all you have to do is get rid of the virus. There may be some audio or visual effect; often there is no effect at all. Minor damage occurs when you have to replace some or all of your executable files from clean backups, or by re-installing. Remember to run FindVirus again afterwards.
Moderate damage is done when a virus trashes the hard disk, scrambles the FAT, or low-level formats the drive. This is recoverable from your last backup; if you take backups every day, you lose, on average, half a day's work. Major damage is done by a virus that gradually corrupts data files, so that you are unaware of what is happening. When you discover the problem, the corrupted files have also been backed up, and you might have to restore a very old backup to get valid data. Severe damage is done by a virus that gradually corrupts data files in a way you cannot see (there is no simple way of knowing whether the data is good or bad), and, of course, your backups have the same problem. Unlimited damage is done by a virus that gives a third party access to your network, by stealing the supervisor password. The damage is then done by the third party, who has control of the network.

CLASSIFICATION OF VIRUSES

Stealth Viruses

If a stealth virus is in memory, any program attempting to read the file (or sector) containing the virus is fooled into believing that the virus is not there. The virus in memory filters out its own bytes, and shows only the original bytes to the program. There are three ways to deal with this:
1. Cold boot from a clean DOS floppy, and make sure that nothing on the hard disk is executed. Run any anti-virus software from the floppy disk. Unfortunately, although this method is foolproof, relatively few people are willing to do it.
2. Search for known viruses in memory. All the virus scanners do this when the programs are run.
3. Use advanced programming techniques to probe the confusion that the virus causes. A process known as the "Anti-Stealth Methodology" in some scanners can be used for this.

Polymorphic Viruses

A polymorphic virus is one that is encrypted, and the decryptor/loader for the rest of the virus is highly variable. With a polymorphic virus, two instances of the virus have no sequence of bytes in common.
This makes it much more difficult for scanners to detect them. Many scanners use a "Fuzzy Logic" technique and a "Generic Decryption Engine" to detect these viruses.

The Partition Sector and Partition Viruses

The partition sector is the first sector on a hard disk. It contains information about the disk, such as the number of sectors in each partition and where the DOS partition starts, plus a small program. The partition sector is also called the "Master Boot Record" (MBR). When a PC starts up, it reads the partition sector and executes the code it finds there. Viruses that use the partition sector modify this code. Since the partition sector is not part of the normal data storage area of a disk, utilities such as DEBUG will not allow access to it. However, it is possible to use Inspect Disk to examine the partition sector. A floppy disk does not have a partition sector.

How to Remove a Partition Sector (MBR) Virus
1. Cold boot from a clean DOS diskette.
2. Run the DOS scanner.
3. Select the drive to clean and "Repair" it.
4. Follow the instructions.

The Boot Sector and Boot Sector Viruses

The boot sector is the first sector on a floppy disk. On a hard disk it is the first sector of a partition. It contains information about the disk or partition, such as the number of sectors, plus a small program. When the PC starts up, it attempts to read the boot sector of the disk in drive A:. If this fails because there is no disk, it reads the boot sector of drive C:. A boot sector virus replaces this sector with its own code and moves the original elsewhere on the disk. Even a non-bootable floppy disk has executable code in its boot sector; this is what displays the "not bootable" message when the computer attempts to boot from the disk. Therefore, a non-bootable floppy can still contain a virus and infect a PC if it is in drive A: when the PC starts up.

File Viruses

File viruses append or insert themselves into executable files, typically .COM and .EXE programs.
A direct-action file virus infects another executable file on disk when its 'host' executable file is run. An indirect-action (or TSR, Terminate and Stay Resident) file virus installs itself into memory when its 'host' is executed, and infects other files when they are subsequently accessed.

Overwriting Viruses

Overwriting viruses overwrite all or part of the original program, so the original program no longer runs. Overwriting viruses are not, therefore, a real problem: they are extremely obvious, and so cannot spread effectively.

APPEARANCES AND INTENTIONS OF VIRUSES

Droppers

Droppers are programs that have been written to perform some apparently useful job but, while doing so, write a virus out to the disk. In some cases, all they do is install the virus (or viruses). A typical example is a utility that formats a floppy disk, complete with the Stoned virus installed on the boot sector.

Failed Viruses

Sometimes a file is found that contains a 'failed virus'. This is the result of either a corrupted 'real' virus or simply bad programming on the part of an aspiring virus writer. The virus does not work: it hangs when run, or fails to infect. Many viruses have severe bugs that defeat their design goals; some will not reproduce successfully, or will fail to perform their intended final actions (such as corrupting the hard disk). In general, many virus authors are very poor programmers.

Packagers

Packagers are programs that in some way wrap something around the original program. This could be an anti-virus precaution, or file compression. Packagers can mask the existence of a virus inside.

Trojans and Jokes

A Trojan is a program that deliberately does unpleasant things, as well as (or instead of) its declared function. Trojans are not capable of spreading themselves and rely on users copying them. A Joke is a harmless program that does amusing things, perhaps unexpectedly.
We include the detection of a few jokes in the Toolkit, where people have found particular jokes that cause concern or offence.

Test Files

Test files are used to test and demonstrate anti-virus software. They are not viruses, simply small files that are recognised by the software and cause it to simulate what would happen if it had found a virus. This allows users to see what happens when the software is triggered, without needing a live virus.

METHODS OF REMOVING VIRUSES

How to Remove a Boot Virus from a Hard Disk
1. Cold boot from a clean DOS diskette.
2. Run the scanner.
3. Select the drive to clean and "Repair" it.
An alternative method is as follows:
1. Cold boot from a clean DOS diskette.
2. Type SYS C: at the DOS prompt (if drive C: is infected). The clean DOS diskette should carry the same version of DOS that is on the hard disk.

How to Remove a Boot Virus from a Floppy
1. Cold boot from a clean DOS diskette.
2. Run the scanner.
3. Make sure to "Replace the Boot Sector" of the floppy.

If you find a new virus...
If you have symptoms that you think are caused by a virus, then:
1. Format a floppy disk in the infected computer.
2. Copy any infected files to that floppy.
3. Copy your FORMAT and CHKDSK programs too.

As you can see, viruses are appalling, and since a virus spreads from one computer to another, the problem only gets worse, just like a contagious human virus, which causes more harm as more people are infected and more need to be treated. The same concept applies to a computer virus infecting computer after computer. This essay has explained various techniques for removing and dealing with computer viruses of different types, afflicting different components of a computer. So, next time you suspect that your computer has been damaged by a virus, read through this essay and apply the remedies indicated.
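At their simplest, the scanners described above work by searching each file for byte sequences known to belong to particular viruses. The following sketch illustrates that idea only; the signature strings and names here are invented for the example, and real scanners add the anti-stealth and generic-decryption techniques discussed above on top of a much larger signature database.

```python
# Illustrative signature scanner: flag data that contains a known byte pattern.
# Both signatures below are made up for this example.

SIGNATURES = {
    "Example.Stoned-like": b"Your PC is now stoned",
    "Example.Test": b"FAKE-VIRUS-TEST-STRING",
}

def scan_bytes(data):
    """Return the names of all known signatures found in `data`."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

def scan_file(path):
    """Scan a file on disk the same way, byte for byte."""
    with open(path, "rb") as f:
        return scan_bytes(f.read())

# A "file" containing the test pattern is flagged; clean data is not.
print(scan_bytes(b"hello FAKE-VIRUS-TEST-STRING world"))  # ['Example.Test']
print(scan_bytes(b"clean data"))  # []
```

This is also why polymorphic viruses are hard to catch: when no fixed byte sequence survives from one infection to the next, a plain substring search like this one has nothing to match.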
f:\12000 essays\technology & computers (295)\Computer Viruses and their Effects on your PC.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Computer Viruses and their Effects on your PC

Table of Contents
What is a Virus?
How a Virus Infects Your System
How Does a Virus Spread?
Biggest Myth: "I Buy All of My Programs on CD ROM from the Store. Store-Bought Software Never Contains Viruses."
Infection (Damages)
Protect Your Computer, Now!!

A virus is an independent program that reproduces itself. It can attach itself to other programs and make copies of itself (i.e., companion viruses). It can damage or corrupt data, or lower the performance of your system by using resources like memory or disk space. A virus can be annoying, or it can cost you lots of cold hard cash. A virus is just another name for a class of programs; viruses can do anything that another program can. The only distinguishing characteristic is that the program has the ability to reproduce and infect other programs. Is a computer virus similar to a human virus? Below is a chart that shows the similarities.

Comparing Biological Viruses & Computer Viruses

Human Virus Effects | Computer Virus Effects
Attacks specific body cells | Attacks specific programs (*.com, *.exe)
Modifies the genetic information of a cell | Manipulates the program: it performs other tasks
New viruses grow in the infected cell itself | The infected program produces virus programs
An infected cell may not exhibit symptoms for a while | The infected program can work without error for a long time
Not all cells the virus contacts are infected | Programs can be made immune against certain viruses
Viruses can mutate and thus cannot clearly be diagnosed | Virus programs can modify themselves and possibly escape detection this way
Infected cells aren't infected more than once by the same virus | Programs are infected only once by most viruses
There are many ways a virus can infect your system. One way applies to file-infecting viruses: you become infected when you run a file carrying such a virus. This kind of virus can only infect if YOU run the program! It targets COM and EXE files, but viruses have also been found in other executable files. Some viruses are memory resident and will infect every file run after the first one; others are "direct action" infectors that immediately infect other files on your hard drive and then leave. Viruses can also be polymorphic: polymorphism means the virus changes itself with every infection so it is harder to find. Virus writers have also come up with the multipartite virus, which can infect boot sectors and the master boot record as well as files, enabling it to attack more targets, spread further, and thus do more damage. A computer virus can be spread in many different ways. The first way is by a person knowingly installing a virus onto a computer; now the computer is infected. The second way is inserting your disk into an infected computer. The infected computer will duplicate the virus onto your disk, and your disk becomes a virus carrier: any computer that comes in contact with this disk can become infected. For example, I once caught a virus from Cochise College; after copying two non-infected disks there, my computer was infected. What if a friend borrows an infected disk? Your friend's computer will most likely become infected the instant that he or she inserts your disk into the computer. The third way is the Internet: a lot of programs on the Internet contain live viruses. There seem to be countless ways to become infected; every time you download a program from somewhere or borrow a disk from a friend, you are taking a risk of getting infected. Even computer software bought in stores has been known to carry viruses. "How? CD-ROMs are non-recordable!"
A virus may be installed onto a disc at the time of manufacturing. In September of 1996, that month's edition of the Microsoft SPCD had a file infected with a virus called "Wazzu": watch out for SIA\MKTOOLS\CASE\ED3905A.DOC. Microsoft also aided the spread of Wazzu by distributing a Wazzu-infected document on the Swiss ORBIT conference CD, and by keeping an identical copy of the infected document on its Swiss Website for at least five days after being notified of the problem. Microsoft records note that over 2 million of the infected CDs were sold. The CDs were replaced in a recall from Microsoft; by then, however, they had already aided the spread of the Wazzu virus.

The major damages can vary, but here are the most common:
A. Fill up your PC with garbage: As a virus reproduces, it takes up space that cannot be used by the operator. As more copies of the virus are made, the available memory space is lessened.
B. Mess up files: Computer files have a fixed method of being stored. With this being the case, it is very easy for a computer virus to affect the system so that some parts of the accessed files cannot be located.
C. Mess up the FAT: The FAT (File Allocation Table) holds the information about the location of files stored on a disk. Any alteration to this information can cause endless trouble.
D. Mess up the boot sector: The boot sector is the special information found on a disk. Changing the boot sector could result in the inability of the computer to run.
E. Erase the whole hard drive/diskette: A virus can simply format a disk. This will cause you to lose all of the data stored on the formatted disk.
F. Reset the computer: A virus can reset your computer. Normally the operator or user has to press a few keys; the virus can do this by sending codes to the operating system.
G. Slowing things down: The object of this kind of virus is to slow down the running of a program. It can cause a 100-megahertz computer to act like a 16-megahertz one.
That is why a 486 or 586 computer can slow down and run as if it were a 286: "turtle speed," as I would call it.
H. Redefine keys: The computer has been programmed to recognize certain codes with the press of certain keys. For example, when you press the letter T, your computer puts a T on your display. A virus can change that mapping; imagine if every time you pressed T, your computer formatted your hard drive.
I. Lock the keyboard: By redefining all the keys to an empty key, so the user cannot use the keyboard to input any data.

People often tell me I am paranoid about viruses. Some forms of paranoia are healthy: when it comes to securing your system from viruses, trust no one, not even your mother (when you exchange disks with her, that is). Thank God for the invention of anti-virus software, programs that can protect your PC from a virus and can also remove one once it is detected. However, there are thousands of viruses in existence, and finding a consistent virus scanning program can be rough. I have read many articles on popular virus scanning programs, and I have found the top two to be:
#1.) McAfee Virus Scan
#2.) Norton Anti-Virus
Both of these programs can prevent a virus from entering your computer. If one sneaks past, you will have a choice to delete the file, clean the virus, or move it. I highly suggest you check out these programs and test them.

Conclusion: Remember, one virus can shred many years of work on your computer. Protect yourself and always use an anti-virus program.
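Damage of type D above, a clobbered boot sector, can often be spotted at the lowest level by one simple check: a valid 512-byte DOS-era boot sector or MBR ends with the signature bytes 55h AAh, and an overwritten sector frequently loses them. The sketch below runs that check on synthetic in-memory data only, since reading a real MBR would require raw disk access.

```python
# Check whether a 512-byte sector carries the standard x86 boot
# signature 0x55 0xAA in its last two bytes. Synthetic data only.

def has_boot_signature(sector: bytes) -> bool:
    """True if `sector` is 512 bytes long and ends with 0x55 0xAA."""
    return len(sector) == 512 and sector[510:512] == b"\x55\xaa"

# A blank but validly-signed sector, and the same sector wiped clean:
good = bytes(510) + b"\x55\xaa"
bad = bytes(512)

print(has_boot_signature(good), has_boot_signature(bad))  # True False
```

Note the converse does not hold: boot sector viruses deliberately keep the 55h AAh signature intact so the machine still boots, which is why real scanners compare the sector's code against known virus patterns rather than relying on this check alone.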
f:\12000 essays\technology & computers (295)\Computer Viruses.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

COMPUTER VIRUSES (anonymous)

<1> WHAT IS A COMPUTER VIRUS:

The term usually used to define a computer virus is: 'A computer virus is often malicious software which replicates itself' [Podell 1987 for a similar definition].
- COMPUTER VIRUSES ARE BASICALLY PROGRAMS, LIKE A SPREADSHEET OR A WORD PROCESSOR.
- PROGRAMS WHICH CAN INSERT EXECUTABLE COPIES OF THEMSELVES INTO OTHER PROGRAMS.
- PROGRAMS THAT MANIPULATE PROGRAMS, MODIFY OTHER PROGRAMS, AND REPRODUCE THEMSELVES IN THE PROCESS.

Comparing Biological Viruses & Computer Viruses

Biological Virus | Computer Virus
Attacks specific body cells | Attacks specific programs (*.COM, *.EXE)
Modifies the genetic information of a cell | Manipulates the program: it performs other tasks
New viruses grow in the infected cell itself | The infected program produces virus programs
Infected cells aren't infected more than once by the same virus | Programs are infected only once by most viruses
An infected organism may not exhibit symptoms for a while | The infected program can work without error for a long time
Not all cells with which the virus has contact are infected | Programs can be made immune against certain viruses
Viruses can mutate and thus cannot clearly be told apart | Virus programs can modify themselves and possibly escape detection this way

However, "computer virus"
is just another name for a class of programs. They can do anything that another program can. The only distinguishing characteristic is that the program has the ability to reproduce and infect other programs.

<2> WHAT KINDS OF PROGRAMS ARE CHARACTERIZED AS VIRUS PROGRAMS:
- PROGRAMS WHICH HAVE THE CAPABILITY TO EXECUTE MODIFICATIONS ON A NUMBER OF PROGRAMS.
- THE CAPABILITY TO RECOGNIZE A MODIFICATION PERFORMED ON A PROGRAM (AND THE ABILITY TO PREVENT FURTHER MODIFICATION OF THE SAME PROGRAM UPON SUCH RECOGNITION).
- MODIFIED SOFTWARE ASSUMES ATTRIBUTES 1 TO 4.

<3> HOW DOES A VIRUS SPREAD:

A computer virus can only be put into your system either by yourself or by someone else. One way in which a virus can be put into your computer is via a Trojan Horse.
- A TROJAN HORSE IS USUALLY CARRIED ON DISKS, PARTICULARLY PIRATED COPIES OF SOFTWARE. IT IS SIMPLY A DAMAGING PROGRAM DISGUISED AS AN INNOCENT ONE. MANY VIRUSES MAY BE HIDDEN IN IT, BUT TROJAN HORSES THEMSELVES DO NOT HAVE THE ABILITY TO REPLICATE.
Viruses can also be spread through a Wide Area Network (WAN) or a Local Area Network (LAN) over a telephone line, for example by downloading a file from a local BBS. A BBS (bulletin board system) is an electronic mailbox that users can access to send or receive messages. However, there seem to be countless ways to become infected: every time you download a program from somewhere or borrow a disk from a friend, you are taking a risk of getting infected.

<4> DAMAGES AND SIGNS OF INFECTION:

a.> Fill up your P.C. with garbage: As a virus reproduces, it takes up space. This space cannot be used by the operator. As more copies of the virus are made, the memory space is lessened.
b.> Mess up files: Computer files have a fixed method of being stored. With this being the case, it is very easy for a computer virus to affect the system so that some parts of the accessed files cannot be located.
c.> Mess up the FAT: The FAT (File Allocation Table) holds the information about the location of files stored on a disk. Any alteration to this information can cause endless trouble.
d.> Mess up the boot sector: The boot sector is the special information found on a disk. Changing the boot sector could result in the inability of the computer to run.
e.> Format a disk/diskette: A virus can simply format a disk, as the operator would with the format or initialise command.
f.> Reset the computer: To reset the computer, the operator or user normally has to press a few keys. The virus can do this by sending the codes to the operating system.
g.> Slowing things down: As the name implies, the object of the virus is to slow down the running of a program.
h.> Redefine keys: The computer has been programmed to recognize that certain codes/signals symbolize a certain keystroke. The virus could change the definition of these keystrokes.
i.> Lock the keyboard: Redefining all keys to an empty key.

<5> WHAT TO DO AFTER A VIRUS ATTACKS:

By the time signs of a virus attack have been recognized, the virus has already reproduced itself several times. Thus, to get rid of the virus, the user has to hunt down and destroy each one of these copies. The easier way is to:
1. Have the original write-protected backup copy of your operating system on a diskette.
2. Power down the machine.
3. Boot up the system from the original system diskette.
4. Format the hard disk.
5. Restore all backups and all executable programs.
*If this is not effective, power down and seek professional help.*

<6> TYPES OF VIRUSES:
a.> OVERWRITING VIRUSES
b.> NON-OVERWRITING VIRUSES
c.> MEMORY RESIDENT VIRUSES

<7> PRACTICE SAFE HEX:

Viruses are a day-to-day reality, and different activities lead to different exposure. To protect oneself from a virus, several things can be done:
1. Avoiding them in the first place.
2. Discovering and getting rid of them.
3. Repairing the damage.
The simple thing that can cut down on exposure rate are to: avoid pirate software, checking programs that have been down loaded form the BBS before running them. Make sure that you have sufficient backups. <8> ANTIVIRUS PRODUCTS COMPANY: The pace at which new antiviral products have been pouring onto the market has accelerated rapidly since the major infection of 1988. Indeed, by early 1989, there were over 60 proprietary products making varied claims for effectiveness in preventing or detecting virus attacks. For: IBM PCs & Compatibles DISK DEFENDER PC SAFE McAFEE SCAN DIRECTOR TECHNOLOGIES THE VOICE CONNECTION McAFEE ASSOCIATES 906 University Place 17835 Skypark Circle 4423 Cheeney Street Evanston, IL 60201 Irvine, CA 92714 Santa Clara, CA 95054 TEL: (408) 727-4559 TEL: (714) 261-2366 TEL: (408) 988-3832 Price: $ 240.00 U.S. Price: $ 45.00 U.S. Price: $ 80.00 U.S. Class : HARD.2 Class : SOFT.1 Class: SOFT.3 For: Macintosh Plus, SE, & II(Apple) VIREX HJC SOFTWARE P.O. BOX 51816 Durham, NC 27717 TEL: (919) 490-1277 Price: $ 99.95 U.S. Class : SOFT.3 *Class 1(infection prevention class)* Most Class 1 products are unable to distinguish between an acceptable or unacceptable access to an executable program. For example, a simply DOS COPY command might cause the waring appear on screen. *Class 2(infection detection class)* All Class 2 products are able to distinguish all DOS commands. Addition to Class 1's prevention fuction, it is able to protect all COM and EXE files from infection. *Class 3(Top class)* Class 3 products are cable of both prevention & detection fuctions. And they are cable of removing the infection viruses. <1>. COMPUTER VIRUSES a high-tech disease WRITTEN BY: RALF BURGER PUBLISH BY: ABACUS, U.S.A <2>. DATA THEFT WRITTEN BY: HUGO CORNWALL PUBLISH BY: PONTING-GREEN, LONDON <3>. 
COMPUTER VIRUSES,WORMS,DATA DIDDLERS,KILLER -PROGRAMS, AND OTHER THREATS TO YOUR SYSTEM WRITTEN BY: JOHN McAFEE & COLIN HAYNES PUBLISH BY: ST.MARTIN'S PRESS, U.S.A ************************************************* * COMPUTER VIRUSES CRISIS * THE SECRET WORLD * * WRITTEN BY: PHILP E FITES * OF COMPUTER * ****************************** WRITTEN BY: * * COMPUTE'S COMPUTER VIRUSES * ALLAN LNNDELL * * WRITTEN BY: RALPH ROBERTS * * f:\12000 essays\technology & computers (295)\computer virusses.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ COMPUTER VIRUSES Kevan <1>WHAT IS A COMPUTER VIRUS: The term usually used to define a computer virus is: ' A computer virus is often malicious software which replicates itself' [ Podell 1987 for similar definition ] - COMPUTER VIRUSES ARE BASICALLY PROGRAMS, LIKE A SPREADSHEET OR A WORD PROCESSOR. - PROGRAMS WHICH CAN INSERT EXECUTABLE COPIES OF ITSELF INTO OTHER PROGRAMS. - PROGRAMS THAT MANIPULATES PROGRAMS, MODIFIES OTHER PROGRAMS AND REPRODUCE ITSELF IN THE PROCESS. 
Comparing biological viruses and computer viruses:
- Attack specific body cells / Attack specific programs (*.COM, *.EXE)
- Modify the genetic information of a cell / Manipulate the program so it performs tasks other than its original ones
- New viruses grow in the infected cell itself / The infected program produces virus programs
- Infected cells aren't infected more than once by the same virus / A program is infected only once by most viruses
- An infected organism may not exhibit symptoms for a while / The infected program can work without error for a long time
- Not all cells the virus contacts are infected / Programs can be made immune against certain viruses
- Viruses can mutate and thus cannot be clearly told apart / Virus programs can modify themselves and possibly escape detection this way
However, "computer virus" is just another name for a class of programs. They can do anything that any other program can; the only distinguishing characteristic is that the program has the ability to reproduce and infect other programs. <2>WHAT KINDS OF PROGRAMS ARE CHARACTERIZED AS VIRUS PROGRAMS: - A PROGRAM WHICH HAS THE CAPABILITY TO MODIFY A NUMBER OF OTHER PROGRAMS. - THE CAPABILITY TO RECOGNIZE A MODIFICATION PERFORMED ON A PROGRAM (THE ABILITY TO PREVENT FURTHER MODIFICATION OF THE SAME PROGRAM UPON SUCH RECOGNITION). - MODIFIED SOFTWARE ASSUMES ATTRIBUTES 1 TO 4.
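The two defining behaviors above, inserting a copy of itself into other programs and recognizing a program that has already been modified, can be shown as a harmless toy simulation (all names here are invented for the example; the "programs" are plain dictionaries, not real executables):

```python
# Toy simulation (NOT functioning malware) of the two defining
# behaviors of a virus: (1) inserting a copy of itself into another
# "program", and (2) recognizing an already-modified program so the
# same program is not infected twice.

MARKER = "VIRUS-SIGNATURE"  # stands in for the inserted executable copy

def infect(host: dict) -> bool:
    """Mark `host` as infected unless it already carries the marker."""
    if MARKER in host["code"]:   # behavior (2): recognize the modification
        return False             # ...and prevent further modification
    host["code"].append(MARKER)  # behavior (1): insert a copy of itself
    return True

program = {"name": "a.com", "code": ["original instructions"]}
assert infect(program) is True    # first infection succeeds
assert infect(program) is False   # already infected, so it is skipped
```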
<3>HOW DOES A VIRUS SPREAD: A computer virus can only be put into your system either by yourself or by someone else. One way a virus can be put into your computer is via a Trojan horse. - A TROJAN HORSE USUALLY ARRIVES ON CONTAMINATED DISKS, PARTICULARLY PIRATED COPIES OF SOFTWARE. IT IS SIMPLY A DAMAGING PROGRAM DISGUISED AS AN INNOCENT ONE. MANY VIRUSES MAY BE HIDDEN IN IT, BUT TROJAN HORSES THEMSELVES DO NOT HAVE THE ABILITY TO REPLICATE. Viruses can also be spread through a wide area network (WAN) or a local area network (LAN) over a telephone line, for example by downloading a file from a local BBS. BBS (bulletin board system): an electronic mailbox that users can access to send or receive messages. There are, however, countless ways to become infected. Every time you download a program from somewhere or borrow a disk from a friend, you are taking a risk of getting infected. <4>DAMAGES AND SIGNS OF INFECTION: a.> Fill Up Your PC with Garbage: As a virus reproduces, it takes up space. This space cannot be used by the operator, and as more copies of the virus are made, the available memory shrinks. b.> Mess Up Files: Computer files are stored in a fixed way, so it is easy for a virus to alter the system such that some parts of the affected files can no longer be located. c.> Mess Up the FAT: The FAT (File Allocation Table) holds the information about where files are stored on a disk. Any alteration to this information can cause endless trouble. d.> Mess Up the Boot Sector: The boot sector is special information found on a disk. Changing the boot sector could leave the computer unable to run. e.> Format a Disk/Diskette: A virus can simply format a disk, just as the operator would with the format or initialise command. f.> Reset the Computer: To reset the computer, the operator only has to press a few keys.
The virus can do this by sending the same codes to the operating system. g.> Slowing Things Down: As the name implies, the virus slows down the running of programs. h.> Redefine Keys: The computer is programmed to recognize that certain codes/signals symbolize a certain keystroke. The virus can change the definition of these keystrokes. i.> Lock the Keyboard: by redefining every key as an empty key. <5>WHAT TO DO AFTER A VIRUS ATTACKS: By the time signs of a virus attack have been recognized, the virus has already reproduced itself several times. Thus, to get rid of the virus, the user has to hunt down and destroy every one of these copies. The easiest way is to: 1. Have the original, write-protected back-up copy of your operating system on a diskette. 2. Power down the machine. 3. Boot up the system from the original system diskette. 4. Format the hard disk. 5. Restore all back-ups and all executable programs. *If this is not effective, power down and seek professional help.* <6>TYPES OF VIRUSES: a.> OVER-WRITING VIRUSES b.> NON-OVERWRITING VIRUSES c.> MEMORY-RESIDENT VIRUSES <7>PRACTICE SAFE HEX: Viruses are a day-to-day reality, and different activities lead to different exposure. To protect oneself from a virus, several things can be done: 1. Avoiding them in the first place. 2. Discovering and getting rid of them. 3. Repairing the damage. The simple things that can cut down the exposure rate are to avoid pirated software, to check programs downloaded from a BBS before running them, and to make sure you have sufficient backups. <8>ANTIVIRUS PRODUCT COMPANIES: The pace at which new antiviral products have been pouring onto the market has accelerated rapidly since the major infections of 1988. Indeed, by early 1989 there were over 60 proprietary products making varied claims of effectiveness in preventing or detecting virus attacks.
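As a minimal sketch of the FAT damage described in item c.> above, a file allocation table can be modeled as a chain of cluster entries (the layout is invented for illustration and is not the real on-disk FAT format); corrupting a single entry makes the rest of a file unreachable:

```python
# Each FAT entry maps a cluster to the next cluster of the same file;
# a special value (here -1) marks end-of-file. This toy shows how
# altering one table entry orphans the remainder of a file's chain.
EOF = -1

def read_chain(fat: dict, start: int) -> list:
    """Follow a file's cluster chain from its starting cluster."""
    chain, cluster = [], start
    while cluster != EOF:
        chain.append(cluster)
        cluster = fat[cluster]
    return chain

fat = {2: 5, 5: 7, 7: EOF}           # one file on clusters 2 -> 5 -> 7
assert read_chain(fat, 2) == [2, 5, 7]

fat[5] = EOF                         # a virus alters one table entry...
assert read_chain(fat, 2) == [2, 5]  # ...and cluster 7 is now lost
```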
For: IBM PCs & Compatibles
DISK DEFENDER - DIRECTOR TECHNOLOGIES, 906 University Place, Evanston, IL 60201. TEL: (408) 727-4559. Price: $240.00 U.S. Class: HARD.2
PC SAFE - THE VOICE CONNECTION, 17835 Skypark Circle, Irvine, CA 92714. TEL: (714) 261-2366. Price: $45.00 U.S. Class: SOFT.1
McAFEE SCAN - McAFEE ASSOCIATES, 4423 Cheeney Street, Santa Clara, CA 95054. TEL: (408) 988-3832. Price: $80.00 U.S. Class: SOFT.3
For: Macintosh Plus, SE, & II (Apple)
VIREX - HJC SOFTWARE, P.O. BOX 51816, Durham, NC 27717. TEL: (919) 490-1277. Price: $99.95 U.S. Class: SOFT.3
*Class 1 (infection prevention class)* Most Class 1 products are unable to distinguish between an acceptable and an unacceptable access to an executable program. For example, a simple DOS COPY command might cause the warning to appear on screen. *Class 2 (infection detection class)* All Class 2 products are able to distinguish all DOS commands. In addition to Class 1's prevention function, they are able to protect all COM and EXE files from infection. *Class 3 (top class)* Class 3 products are capable of both prevention and detection functions, and they are capable of removing the infecting viruses.
<1>. COMPUTER VIRUSES: A HIGH-TECH DISEASE. Written by Ralf Burger. Published by Abacus, U.S.A.
<2>. DATA THEFT. Written by Hugo Cornwall. Published by Ponting-Green, London.
<3>. COMPUTER VIRUSES, WORMS, DATA DIDDLERS, KILLER PROGRAMS, AND OTHER THREATS TO YOUR SYSTEM. Written by John McAfee & Colin Haynes. Published by St. Martin's Press, U.S.A.
<4>. THE COMPUTER VIRUS CRISIS. Written by Philip E. Fites.
<5>. THE SECRET WORLD OF COMPUTER. Written by Allan Lundell.
<6>. COMPUTE'S COMPUTER VIRUSES. Written by Ralph Roberts.
f:\12000 essays\technology & computers (295)\Computers 2.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Tim Gash 1 CRS-07 Mr.
Drohan January 31, 1997 The History and Future of Computers With the advances in computer technology it is now possible for more and more Canadians to have personal computers in their homes. Breakthroughs in computer processing speed and storage capacity, combined with the reduced size of the computer, have allowed even the smallest apartment to hold a computer. In the past the only places to have computers were military institutes and some universities, because of the computers' immense size and price. Today, with falling computer prices and the opportunity to access larger networks, the share of homes with computers has grown from just 10% in 1986 to 25% in 1994. Also, of that 25%, 34% were equipped with modems, which allow for connection to on-line services via telephone lines. The primitive start of the computer came about around 4000 BC with the invention of the abacus by the Chinese. It was a rack with beads strung on wires that could be moved to make calculations. The first digital computer is usually credited to Blaise Pascal. In 1642 he made the device to aid his father, who was a tax collector. In 1694 Gottfried Leibniz improved the machine so that, with the rearrangement of a few parts, it could be used to multiply. The next logical advance came from Thomas of Colmar, who produced a machine that could perform all four basic operations: addition, subtraction, multiplication and division. With this added versatility the device was in operation up until the First World War. Thomas of Colmar made the common calculator, but the real start of computers as they are known today comes from Charles Babbage. Babbage designed a machine that he called a Difference Engine. It was designed to make many long calculations automatically and print out the results. A working model was built in 1822 and fabrication began in 1823. Babbage worked on his invention for 10 years before losing interest in it.
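The Difference Engine's working principle, tabulating a polynomial using nothing but repeated addition of finite differences, can be sketched in a few lines (a modern illustration of the method, not a model of Babbage's actual hardware):

```python
# Tabulate a polynomial by the method of finite differences, the
# principle behind Babbage's Difference Engine: once the initial
# differences are seeded, every further value needs only additions.

def difference_engine(initial_diffs, steps):
    """initial_diffs = [f(0), delta f(0), delta^2 f(0), ...];
    returns [f(0), f(1), ..., f(steps-1)]."""
    diffs = list(initial_diffs)
    values = []
    for _ in range(steps):
        values.append(diffs[0])
        # propagate: each difference absorbs the one below it
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return values

# f(x) = x^2 + x + 1: f(0) = 1, delta f(0) = 2, delta^2 f = 2 (constant)
assert difference_engine([1, 2, 2], 5) == [1, 3, 7, 13, 21]
```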
His loss of interest was caused by a new idea he had thought up. The Difference Engine was limited in adaptability as well as applicability. The new idea was a general-purpose, automatic, mechanical digital computer that would be fully program-controlled. He called this the Analytical Engine. It would have conditional control transfer capability, so that commands could be input in any order, not just in the order in which the machine had been programmed. The machine was to use punch cards, which would be read into the machine from several reading stations. It was supposed to operate automatically by steam power and require only one person to operate it. Babbage's machines were never completed, for reasons such as imprecise machining techniques, the interest of few people, and the fact that the steam power the devices required was not readily available. The next advance in computing came from Herman Hollerith and James Powers. They made devices that could automatically read cards into which information had been punched. This advance was a huge step, because it provided memory storage capability. Companies such as IBM and Remington made improved versions of the machine that lasted for over fifty years. ENIAC, conceived in 1942 by J. Presper Eckert and his associates, was in use from 1946 to 1955. It was the first high-speed digital computer and was one thousand times faster than its predecessors, the relay computers. ENIAC was very bulky, taking up 1,800 square feet of floor space and containing 18,000 vacuum tubes. It was also very limited in programmability, but it was very efficient at the programs it had been designed for. In 1945 John von Neumann, along with the University of Pennsylvania, came up with what is known as the stored-program technique. Also, due to the increasing speed of the computer, subroutines needed to be repeated so that the computer could be kept busy.
It would also be better if instructions to the computer could be changed during a computation so that there would be a different outcome. Von Neumann met these needs by creating a command called a conditional control transfer. The conditional control transfer allows program sequences to be started and stopped at any point. Instructions were also stored together with data so that they could be arithmetically changed just like data. This generation of computers included ones using RAM, as well as EDVAC and the first commercially available computer, UNIVAC. These computers used punched-card or punched-tape reading devices. Some of the later ones were only about the size of a grand piano and contained 2,500 electron tubes, which was much smaller than ENIAC. During the fifties and sixties the two most important advances were magnetic core memory and the transistor. These discoveries increased RAM sizes from 8,000 to 64,000 words in commercially available computers. The first supercomputers were made with this new technology. During this period successful commercial computers were made by Burroughs, IBM, Sperry-Rand, Honeywell and Control Data. These computers could now have printers, disk storage, tape storage, stored programs and memory operating systems. They were usually owned by industry, government and private laboratories. The next advance came in the form of a chip. Transistors and vacuum tubes created vast amounts of heat, and this damaged the delicate internal parts of the computer. The heat problem was greatly reduced by the integrated circuit. The integrated circuit, made in 1958, consisted of three components placed on a small silicon disc. As technology advanced, more and more components were fit onto individual chips, and this resulted in smaller and smaller computers.
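The stored-program idea described above, instructions held in memory as data plus a conditional control transfer (a conditional jump), can be illustrated with a tiny toy interpreter; the instruction set here is invented purely for the example:

```python
# Toy stored-program machine: the program lives in memory as data,
# and a conditional jump (the "conditional control transfer") lets
# execution loop, branch, and halt at any point.

def run(memory):
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        op, arg = memory[pc]
        if op == "ADD":
            acc += arg
            pc += 1
        elif op == "JUMP_IF_LESS":      # the conditional control transfer
            pc = arg if acc < 10 else pc + 1
        elif op == "HALT":
            return acc

# Add 1 repeatedly until the accumulator reaches 10, then halt.
program = [("ADD", 1), ("JUMP_IF_LESS", 0), ("HALT", 0)]
assert run(program) == 10
```

Because the program is ordinary data in memory, it could itself be modified between runs, the property the essay notes made this generation so flexible.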
There was also an operating system created during this stage that allowed many programs to be run at once, with one central program that had the ability to monitor and coordinate computer memory. f:\12000 essays\technology & computers (295)\Computers 3.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1. The virus is made up of five parts and is in the size range of 10 nm-300 nm in diameter. The first is the coat, made up of protein, which protects the virus to a point. Next is the head, which contains the genetic material for the virus; the genetic material of a virus is DNA (or, in some viruses, RNA). The two other parts are the tail sheath and the tail fibers, which are used for odd jobs. I believe that a virus is not considered to be a living creature due to the fact that it is a parasitic reproducer. To me it is just like ripping up a piece of paper, because it is still the same thing and it isn't carrying out any function besides reproduction. Since the virus cannot continue its functions without taking from a host and being a parasite, it is considered an obligate parasite. 2. The adult fern plant in its dominant generation (the sporophyte) develops sporangia on one side of its leaf. When meiosis is finished inside the sporangia and the spores are completed, the annulus dries out, releasing the spores. The spore germinates and grows into a prothallus, which is the gametophyte generation. The antheridia and the archegonia develop on the bottom of the prothallus; the archegonia are at the notch of the prothallus and the antheridia are located near the tip. Fertilization occurs when outside moisture is present and the sperm from the antheridia swim to the eggs of the archegonia. A zygote is formed on the prothallus and a new sporophyte grows. 4. Flowering plants have unique characteristics that help them survive. One is the flower itself, which contains the reproductive structures.
The color of the flower helps because it may attract birds and insects that spread the plant's pollen, which diversifies the later generations of plants. Flowers also produce fruits that protect their seeds and disperse them with the help of fruit-eating animals. 5. Fungi, Animalia, and Plantae are all believed to have evolved from Protista. All three of these kingdoms are eukaryotic, and their cells have a nucleus and all the other organelles. Fungi live on organic material they digest, plants produce their own organic material, and animals go out and find their food. Animalia are heterotrophic whereas Plantae are photosynthetic. Fungi, which digest their food on the outside, are different from animals, which digest their food on the inside. Plants and animals both have organ systems, but animals have organized muscle fibers and plants do not. 8. The Gastropoda, Pelecypoda, and Cephalopoda all share three characteristics. The first is the visceral mass, which includes internal organs such as a highly specialized digestive tract, paired kidneys, and reproductive organs. The mantle is the second; it is a covering that doesn't completely cover the visceral mass. The last is the foot, which can be used for movement, attachment, food capture, or a combination of these. The gastropods are the snails and slugs. They use their foot for crawling and their mantle (shell) to protect their visceral mass. The class Pelecypoda consists of clams, oysters, scallops, and mussels. These animals have two shells that are hinged together by a strong muscle, and these shells protect the visceral mass. They use their foot for making threads so they can attach to things. Cephalopods consist of octopuses, squids, and nautiluses. They use their mantle cavity to squeeze water out, which causes locomotion. The foot has evolved into tentacles around the head that are used to catch prey.
Nautiluses have an external shell, squids have a smaller, internal shell, and octopuses lack shells entirely. 9. The word arthropod means "jointed foot," which points to some of the features of an arthropod: jointed appendages, compound eyes, an exoskeleton, and a brain with a ventral solid nerve cord. The class Crustacea has compound eyes and five pairs of appendages, two of which are sensory antennae. Some examples are shrimp, crayfish, lobsters, and crabs. Insecta has 900,000 species in its class. For example, grasshoppers have compound eyes and five pairs of appendages: three pairs are legs, one of which is for hopping, and two pairs are wings. Spiders, which belong to the class Arachnida, have six pairs of appendages. The first pair of appendages is modified into fangs and the second pair is used for chewing. The other four are walking legs ending in claws. Spiders don't have compound eyes; instead, they have simple eyes. More examples are scorpions, ticks, mites, and chiggers. Two similar classes are Diplopoda and Chilopoda, because they are segmented in the same way and each segment has a pair of walking legs, but in the Diplopoda some segments fuse together and seem to have two pairs of legs per segment. 10. The phylum Chordata contains creatures with bilateral symmetry, a well-developed coelom, and segmentation. In order to be placed in this phylum they must have had a dorsal hollow nerve cord, a dorsal supporting rod called a notochord, and gill slits or pharyngeal pouches at some time in their life history. In the subphylum Urochordata the only one of the three traits carried into adulthood is the gill slits; in the tadpole stage of their life they have all three of these characteristics. The subphylum Cephalochordata retains all three qualifications into adult form and has segmented bodies. The subphylum Vertebrata has all three traits as usual, but its notochord is replaced by a vertebral column. 11.
In these fish the sac-like lungs were placed at the end of the fish's digestive tract. When the oxygen level in the water they were in was low, they could still collect oxygen by breathing air. Over time these sac-like lungs became swim bladders, which control the up-and-down motion of a fish. 12. The reptiles' most helpful advance in reproduction, which helped them live on land, was the use of internal fertilization and the ability to lay eggs protected by shells. The shells got rid of the swimming larva stage; the eggs do everything inside of the shell. The eggs have extraembryonic membranes that protect the embryo, get rid of wastes, and give the embryo oxygen, food, and water. Inside the shell there is a membrane called the amnion, which is filled with fluid and is used as a pond where the embryo develops; it keeps the embryo from drying out. 13. The three subclasses of Mammalia all have hair and mammary glands that produce milk. Each of these subclasses also has well-developed sense organs, limbs for movement, and an enlarged brain. In the subclass Prototheria the animals lay their eggs in a burrow and incubate them. When the young hatch they receive milk by licking it off the modified sweat glands that are seeping milk. In the subclass Metatheria the young begin developing inside the female but are born at a very immature stage. The newborn crawl into their mother's pouch and begin nursing; while they are nursing they continue to develop. In the subclass Eutheria the organisms contain a placenta across which maternal and fetal blood exchange nutrients and wastes. The young develops inside the mother's uterus until it is ready to be born.
f:\12000 essays\technology & computers (295)\Computers 4.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Computers are used to track reservations for the airline industry, process billions of dollars for banks, manufacture products for industry, and conduct major transactions for businesses, and more and more people now have computers at home and at the office. People commit computer crimes because of society's declining ethical standards more than any economic need. According to experts, gender is the only bias: the profile of today's non-professional thieves crosses all races, age groups and economic strata. Computer criminals tend to be relatively honest and in a position of trust: few would do anything to harm another human, and most do not consider their crime to be truly dishonest. Most are males; women have tended to be accomplices, though of late they are becoming more aggressive. Computer criminals tend to be "between the ages of 14-30; they are usually bright, eager, highly motivated, adventuresome, and willing to accept technical challenges." (Shannon, 16:2) "It is tempting to liken computer criminals to other criminals, ascribing characteristics somehow different from 'normal' individuals, but that is not the case." (Sharp, 18:3) It is believed that the computer criminal "often marches to the same drum as the potential victim but follows an unanticipated path." (Blumenthal, 1:2) There is no single profile of a computer criminal, because they range from young teens to elders, from black to white, from short to tall. Definitions of computer crime have changed over the years as the users and misusers of computers have expanded into new areas. "When computers were first introduced into businesses, computer crime was defined simply as a form of white-collar crime committed inside a computer system." (2600: Summer 92, p.13) Some new terms have been added to the computer criminal vocabulary.
"A Trojan horse is hidden code put into a computer program. Logic bombs are implanted so that the perpetrator doesn't have to be physically present." (Phrack 12, p.43) Another form of hidden code is the "salami." The name came from the big salami loaves sold in delis years ago: people would take small bites out of the loaves and secretly return them to the shelves in the hope that no one would notice anything missing. (Phrack 12, p.44) Congress has been reacting to the outbreak of computer crimes. "The U.S. House Judiciary Committee approved a bipartisan computer crime bill that was expanded to make it a federal crime to hack into credit and other data bases protected by federal privacy statutes." (Markoff, B 13:1) The bill creates several categories of federal misdemeanors and felonies for unauthorized access to computers to obtain money, goods or services, or classified information. This also applies to computers used by the federal government or used in interstate or foreign commerce, which would cover any system accessed by interstate telecommunication systems. "Computer crime often requires more sophistication than people realize." (Sullivan, 40:4) Many U.S. businesses have ended up in bankruptcy court unaware that they were victimized by disgruntled employees. American businesses wish that the computer security nightmare would vanish like a fairy tale. Information processing has grown into a gigantic industry. "It accounted for $33 billion in services in 1983, and in 1988 it was accounted to be $88 billion." (Blumenthal, B 1:2) All this information is vulnerable to greedy employees, nosy teenagers and general carelessness, yet no one knows whether the sea of computer crimes is "only as big as the Gulf of Mexico or as huge as the North Atlantic." (Blumenthal, B 1:2) Vulnerability is likely to increase in the future.
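In computing, the "salami" technique described above is usually illustrated as skimming the sub-cent remainders of rounded calculations. The sketch below uses invented figures and a hypothetical interest rate purely to show the arithmetic; each shaved slice is under a cent, but across many accounts they add up:

```python
# Classic textbook illustration of the "salami" technique: interest
# amounts are rounded DOWN to whole cents and the shaved fractions
# are diverted into one account. All figures are invented.
from decimal import Decimal, ROUND_FLOOR

RATE = Decimal("0.05")          # hypothetical 5% interest rate
balances = [Decimal("1234.56"), Decimal("987.65"), Decimal("5432.10")]

skimmed = Decimal("0")
for balance in balances:
    exact = balance * RATE                       # e.g. 61.7280
    paid = exact.quantize(Decimal("0.01"), rounding=ROUND_FLOOR)
    skimmed += exact - paid                      # the "salami slice"

# Three accounts yield 0.0080 + 0.0025 + 0.0050 = 0.0155 dollars;
# scaled to millions of accounts, the slices become real money.
assert skimmed == Decimal("0.0155")
```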
And by the turn of the century, "nearly all of the software to run computers will be bought from vendors rather than developed in-house; standardized software will make theft easier." (Carley, A 1:1) A two-year Secret Service investigation code-named Operation Sun Devil targeted companies all over the United States and led to numerous seizures. Critics of Operation Sun Devil claim that the Secret Service and the FBI, which ran an almost identical operation, conducted unreasonable searches and seizures, disrupted the lives and livelihoods of many people, and generally conducted themselves in an unconstitutional manner. "My whole life changed because of that operation. They charged me and I had to take them to court. I have to thank 2600 and Emmanuel Goldstein for publishing my story. I owe a lot to fellow hackers and the Electronic Frontier Foundation for covering the brunt of the legal fees so we could fight for our rights." (Interview with Steve Jackson, who was charged in Operation Sun Devil) The case of Steve Jackson Games vs. the Secret Service has yet to come to a verdict, but one should arrive very soon. The Secret Service seized all of Steve Jackson's computer materials, on which he made his living. They charged that he made games that published information on how to commit computer crimes, and he was accused of running an underground hacking system. "I told them it was only a game and that I was angry and that was the way that I tell a story. I never thought Hacker [Steve Jackson's game] would cause such a problem. My biggest problem was that they seized the BBS (Bulletin Board System) and because of that I had to make drastic cuts, so we laid off eight people out of 18.
If the Secret Service had just come with a subpoena we could have shown or copied every file in the building for them." (Steve Jackson interview) Computer professionals are grappling not only with issues of free speech and civil liberties, but also with how to educate the public and the media about the difference between on-line computer experimenters and genuine criminals. They also point out that, while computer networks make possible a new kind of crime, they are protected by the same laws and freedoms as any real-world domain. "A 14-year-old boy connects his home computer to a television line, taps into the computer at his neighborhood bank, and regularly transfers money into his personal account." (2600: Spring 93, p.19) On paper and on screens a popular new mythology is growing quickly in which computer criminals are the 'Butch Cassidys' of the electronic age. "These true tales of computer capers are far from being futuristic fantasies." (2600: Spring 93, p.19) They are inspired by scores of real-life cases. Computer crimes are not just crimes against the computer; they also include the theft of money, information, software, benefits and welfare, and much more. "With the average damage from a computer crime amounting to about $0.5 million, sophisticated computer crimes can rock the industry." (Phrack 25, p.6) Computer crimes can take many forms. Swindling or stealing money is one of the most common computer crimes. An example of this kind of crime involves the Wells Fargo Bank, which discovered an employee using the bank's computer to embezzle $21.3 million; it is the largest U.S. electronic bank fraud on record. (Phrack 23, p.46) Credit card scams are also a type of computer crime. This is one that frightens many people, and for good reason. A fellow computer hacker who goes by the handle of Raven is someone who uses his computer to access credit data bases. In a talk that I had with him he tried to explain what he did and how he did it.
He is a very intelligent person; he gained illegal access to a credit data base and obtained the credit histories of local residents. He then allegedly used the residents' names and credit information to apply for 24 Mastercard and Visa cards. He used the cards to issue himself at least $40,000 in cash from a number of automatic teller machines. He was caught once, but he was only withdrawing $200, so it was a minor larceny, and since they couldn't prove that he had done the other ones he was put on probation. "I was 17 and I needed money and the people in the underground taught me many things. I would not go back and not do what I did, but I would try not to get caught next time. I am the leader of HTH (High Tech Hoods) and we are currently devising other ways to make money. If it weren't for my computer my life would be nothing like it is today." (Interview with Raven) "Finally, one of the thefts involving the computer is the theft of computer time. Most of us don't recognize this as a crime, but Congress considers it one." (Ball, V85) Every day people are urged to use the computer, but sometimes the use becomes excessive or improper or both. For example, at most colleges computer time is thought of as a free good: students and faculty often computerize mailing lists for their churches or fraternal organizations, which might be written off as good public relations. But use of the computers for private consulting projects without payment to the university is clearly improper. In business it is similar. Management often looks the other way when employees play computer games or generate a Snoopy calendar, but if this becomes excessive the employee is stealing work time. And computers can process only so many tasks at once. Although considered less severe than other computer crimes, such activities can represent a major business loss.
"While most attention is currently being given to the criminal aspects of computer abuses, it is likely that civil action will have an equally important effect on long term security problems."(Alexander, V119) The issue of computer crimes draw attention to the civil or liability aspects in computing environments. In the future there may tend to be more individual and class action suits. CONCLUSION Computer crimes are fast and growing because the evolution of technology is fast, but the evolution of law is slow. While a variety of states have passed legislation relating to computer crime, the situation is a national problem that requires a national solution. Controls can be instituted within industries to prevent such crimes. Protection measures such as hardware identification, access controls software and disconnecting critical bank applications should be devised. However, computers don't commit crimes; people do. The perpetrator's best advantage is ignorance on the part of those protecting the system. Proper internal controls reduce the opportunity for fraud. BIBLIOGRAPHY Alexander, Charles, "Crackdown on Computer Capers," Time, Feb. 8, 1982, V119. Ball, Leslie D., "Computer Crime," Technology Review, April 1982, V85. Blumenthal,R. "Going Undercover in the Computer Underworld". New York Times, Jan. 26, 1993, B, 1:2. Carley, W. "As Computers Flip, People Lose Grip in Saga of Sabatoge at Printing Firm". Wall Street Journal, Aug. 27, 1992, A, 1:1. Carley, W. "In-House Hackers: Rigging Computers for Fraud or Malice Is Often an Inside Job". Wall Street Journal, Aug 27, 1992, A, 7:5. Markoff, J. "Hackers Indicted on Spy Charges". New York Times, Dec. 8, 1992, B, 13:1. Finn, Nancy and Peter, "Don't Rely on the Law to Stop Computer Crime," Computer World, Dec. 19, 1984, V18. Phrack Magazine issues 1-46. Compiled by Knight Lightning and Phiber Optik. Shannon, L R. "THe Happy Hacker". New York Times, Mar. 21, 1993, 7, 16:2. Sharp, B. "The Hacker Crackdown". 
New York Times, Dec. 20, 1992, 7, 18:3.
Sullivan, D. "U.S. Charges Young Hackers." New York Times, Nov. 15, 1992, 1, 40:4.
2600: The Hacker Quarterly, issues Summer 92-Spring 93. Compiled by Emmanuel Goldstein.

f:\12000 essays\technology & computers (295)\Computers and Marketing.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

COMPUTERS AND MARKETING

Marketing is the process by which goods are sold and purchased. The aim of marketing is to acquire, retain, and satisfy customers. Modern marketing has evolved into a complex and diverse field that includes a wide variety of special functions, such as advertising, mail-order business, public relations, retailing and merchandising, sales, market research, and pricing of goods. Businesses, and particularly the marketing arms of businesses, rely a great deal on computers. Computers play a significant role in inventory control, processing and handling orders, communication between satellite companies in an organization, design and production of goods, manufacturing, product and market analysis, advertising, producing the company newsletter, and in some cases complete control of company operations. In today's extremely competitive business environment, businesses are searching for ways to improve profitability and to maintain their position in the marketplace. As competition becomes more intense, the formula for success becomes more difficult. Two developments have greatly aided companies in these goals: the innovative software of CAD/CAM and, last but not least, the World Wide Web. Computer-aided design and computer-aided manufacturing (CAD/CAM), the integration of two technologies, has aided companies all over the world and has often been called the new industrial revolution. In CAD, engineers and designers use specialized computer software to create models that represent characteristics of objects.
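Under the hood, a CAD model of this kind comes down to geometric elements and mathematical transformations on them. As a rough illustrative sketch (not any particular CAD package's API), here is how the basic drafting operations on a set of 2D points might look:

```python
import math

# Hypothetical minimal model: a shape is just a list of (x, y) points.
# Real CAD systems store far richer data, but the core drafting
# operations reduce to the same arithmetic.

def rotate(points, angle_deg):
    """Rotate points about the origin by angle_deg degrees."""
    a = math.radians(angle_deg)
    return [(x * math.cos(a) - y * math.sin(a),
             x * math.sin(a) + y * math.cos(a)) for x, y in points]

def mirror_x(points):
    """Mirror points across the x-axis."""
    return [(x, -y) for x, y in points]

def move(points, dx, dy):
    """Translate points by (dx, dy)."""
    return [(x + dx, y + dy) for x, y in points]

def scale(points, factor):
    """Scale points about the origin by a uniform factor."""
    return [(x * factor, y * factor) for x, y in points]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
bigger = scale(square, 2)        # [(0, 0), (2, 0), (2, 2), (0, 2)]
shifted = move(bigger, 5, 5)
```

Because each operation returns a new point list, a designer's session is simply a chain of such transformations applied to the stored model, which is why redesigns cost so much less than rebuilding a physical prototype.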
These models are analyzed by computer and redesigned as necessary. This gives companies needed flexibility to study different and daring designs without the high cost of building and testing actual models, saving millions of dollars. In CAM, designers and engineers use computers for planning manufacturing processes, testing finished parts, controlling manufacturing operations, and managing entire plants. CAM is linked to CAD through a database shared by design and manufacturing engineers. The major applications of CAD/CAM are mechanical design and electronic design. Computer-aided mechanical design is usually done with automated drafting programs that use interactive computer graphics. Information is entered into the computer to create basic elements such as circles, lines, and points. Elements can be rotated, mirrored, moved, and scaled, and users can zoom in on details. Computerized drafting is quicker and more accurate than manual drafting, and it makes modifications much easier. Desktop manufacturing enables a designer to construct a model directly from data stored in computer memory. These software programs help designers consider both function and manufacturing consequences at early stages, when designs are easily modified. More and more manufacturing businesses are integrating CAD/CAM with other aspects of production, including inventory tracking, scheduling, and marketing. This approach, known as computer-integrated manufacturing (CIM), speeds the processing of orders, makes materials management more effective, and creates considerable cost savings. In addition to designing and manufacturing a product, a company must be able to advertise, market, and sell it effectively. Much of what passes for business is nothing more than making connections with other people. What if you could pass out your business card to thousands, maybe millions, of potential clients and partners?
You can, twenty-four hours a day, inexpensively and simply, on the World Wide Web. Firms communicate with their customers through various types of media. These media usually follow a passive one-to-many model, in which a firm reaches many current and potential customers through marketing efforts that allow only limited feedback from the customer. For several years a revolution has been developing that is dramatically changing traditional advertising and communication media. This revolution is the Internet, a massive global network of interconnected computer networks with the potential to drastically change the way firms do business with their customers. The World Wide Web is a hypertext-based information service. It provides access to multimedia, complex documents, and databases. The Web is one of the most effective vehicles for providing information because of its visual impact and advanced features. It can serve as a complete presentation medium for a company's corporate information or for information on all of its products and services. The recent growth of the World Wide Web (WWW) has opened up new markets and shattered boundaries to selling to a worldwide audience. Marketers can use the Web to create a client base and for product and market analysis, rapid information access, wide-scale information dissemination, rapid communication, cost-effective document transfers, expert advice and help, recruiting new employees, peer communications, and new business opportunities. The usefulness of the Internet, or WWW, depends directly on the products or services of each business. The benefits differ depending on the type of business and whether you are a supplier, retailer, or distributor. Let's examine these in more detail. Finding new clients and new client bases is not always an easy task. The process involves careful market analysis, product marketing, and consumer-base testing.
The Internet is a ready base of several million people from all walks of life. One can easily find new customers and clients in this massive group, provided that your presence on the Internet is known. If you could keep your customers informed of every reason why they should do business with you, your business would certainly increase. Making business information available is one of the most important ways to serve your customers. Before people decide to become customers, they want to know about your company: what you do and what you can do for them. This can be accomplished easily and inexpensively on the World Wide Web. Many users also perform product analyses and comparisons and report their findings via the World Wide Web. Quite frequently one can find others who are familiar with a product you are currently testing, so a company can get first-hand reports on the functionality of such products before spending a great deal of money. The large base of Internet users is also a prime audience for the distribution of surveys analyzing the market for new product or service ideas. These surveys can reach millions of people and potential clients with very little effort on the part of the surveyors. Once a product is on the market, you can examine the level of satisfaction users have received from it. Customer feedback can lead to new and improved products, and it will tell you what customers think of your product faster, more easily, and far less expensively than any other market you may reach. For the cost of a page or two of Web programming, you get a crystal ball into where to position your product or service in the marketplace. Accessing information over the Internet is usually much faster than transmissions and transfers via fax or postal courier services.
You can access information and data from countries around the world and make interactive connections to remote computer systems just about anywhere. Electronic mail has also proved an effective solution to the problem of telephone tag. Contacting others through e-mail provides a unique method of communication with the speed of a telephone conversation and the advantages of postal mail. E-mail can be sent from just about anywhere with Internet service or access, so businesspeople and travelers can keep in touch with up-to-the-minute details from the office. Another benefit of the World Wide Web is wide-scale information circulation. You can place documents on the Internet and instantly make them accessible to millions of users around the world. Hypertext documents provide an effective technique for presenting information to subscribers, clients, or the general public. Creating World Wide Web documents and registering your site with larger Web sites gives your documents a reach larger, and cheaper, than the circulation of many major newspapers or television media. You may not be able to use the mail, telephone, and regulatory systems in all of your potential international markets; with the World Wide Web, however, you can open a dialogue with international markets as easily as with the company across the street. The Web is also more cost-effective than conventional advertising. Transferring on-line documents via the Internet takes a minimal amount of time, saving a great deal of money over postal or courier services, which can also suffer late deliveries, losses, or damage. If a document transfer fails on the Internet, you can always try again, since the cost of the transfer is exactly the same. Current or potential clients are not lost due to late or absent documents.
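The point about failed transfers costing nothing extra to repeat can be sketched as a generic retry loop. The `send_document` callable here is a hypothetical stand-in for whatever transfer mechanism a business actually uses (an FTP upload, an e-mail send, and so on):

```python
import time

def transfer_with_retries(send_document, attempts=3, delay_seconds=1.0):
    """Try a document transfer up to `attempts` times.

    `send_document` is any zero-argument callable that raises an
    exception on failure. Unlike a lost courier package, a failed
    network transfer can simply be repeated at no extra cost.
    """
    for attempt in range(1, attempts + 1):
        try:
            return send_document()
        except OSError:
            if attempt == attempts:
                raise                  # give up after the last try
            time.sleep(delay_seconds)  # brief pause, then retry

# Example: a flaky transfer that fails twice, then succeeds.
calls = {"n": 0}
def flaky_send():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("connection dropped")
    return "delivered"

result = transfer_with_retries(flaky_send, attempts=3, delay_seconds=0.0)
```

The same pattern, wrapped around a real transfer function, is what makes on-line document delivery robust against the transient failures that would sink a one-shot courier dispatch.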
Beyond product and market analysis, a great number of experts on the Internet make their presence widely known and easily accessible. Quite often you can get free advice and help with your problems from the same people who charge large organizations, magazines, and other periodicals handsomely for their consulting services. Researchers and business executives alike attest that much of their communication over the Internet is with others in their line of research or field of work. Communicating with peers allows them to share ideas, problems, and solutions. Often people find that others in their field have already created solutions for problems similar to their own; they can obtain advice on their own situations and craft a solution based on this shared knowledge. Many businesspeople and companies are continuously on the lookout for new and innovative ideas as viable business ventures. Internet users are continuously coming up with such ideas, both because of the research the Internet makes available and because of the cooperative atmosphere that surrounds it. In addition, the Internet hosts many job lists and resumes online for prospective employers, and new resumes are constantly posted to the Web to inform companies of the availability of new skills. As competition intensifies in the business world, consumers are faced with more and more products and services to choose from. The future of business is being decided right now in the minds and wallets of customers. The successful business and marketing approach uses everything possible to ensure that the customer chooses its product or service. Computer technology is by far the most important and impressive means by which to ensure a company's success.
Computers play a significant role in every aspect of a company's survival, from product design and manufacturing, client databases, inventory control, market analysis, advertising, and sales, to total company operations.

f:\12000 essays\technology & computers (295)\Computers and Society.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The decade of the 1980s saw an explosion in computer technology and computer usage that deeply changed society. Today computers are a part of everyday life: in their simplest form they are digital watches; at their most complex they manage power grids, telephone networks, and the money of the world. Henry Grunwald, former US ambassador to Austria, best describes the computer's functions: "It enables the mind to ask questions, find answers, stockpile knowledge, and devise plans to move mountains, if not worlds." Society has embraced the computer and accepted it for its many powers, which can be used for business, education, research, and warfare. The first mechanical calculator, a system of moving beads called the abacus, was invented in Babylonia around 500 BC. The abacus provided the fastest method of calculating until 1642, when the French scientist Pascal invented a calculator made of wheels and cogs. The concept of the modern computer was first outlined in 1833 by the British mathematician Charles Babbage. His design for an analytical engine contained all of the necessary components of a modern computer: input devices, a memory, a control unit, and output devices. Most of the actions of the analytical engine were to be carried out through the use of punched cards. Even though Babbage worked on the analytical engine for nearly 40 years, he never actually built a working machine. In 1889 Herman Hollerith, an American inventor, patented a calculating machine that counted, collated, and sorted information stored on punched cards.
His machine was first used to help sort statistical information for the 1890 United States census. In 1896 Hollerith founded the Tabulating Machine Company to produce similar machines; in 1924 the company changed its name to International Business Machines Corporation. IBM made the punch-card office machinery that dominated business until the late 1960s, when a new generation of computers made punch-card machines obsolete. The first fully electronic computer used vacuum tubes and was so secret that its existence was not revealed until decades after it was built. Built in 1943 with the involvement of the English mathematician Alan Turing, the Colossus was the computer British cryptographers used to break secret German military codes. The first modern general-purpose electronic computer was ENIAC, the Electronic Numerical Integrator and Calculator. Designed by two American engineers, John Mauchly and J. Presper Eckert, Jr., ENIAC was first used at the University of Pennsylvania in 1946. The invention of the transistor in 1948 brought about a revolution in computer development: vacuum tubes were replaced by small transistors that generated little heat and functioned perfectly as switches. Another big breakthrough in computer miniaturization came in 1958, when Jack Kilby designed the first integrated circuit, a wafer that included transistors, resistors, and capacitors, the major components of electronic circuitry. Using inexpensive silicon chips, engineers succeeded in putting more and more electronic components on each chip. Another revolution in microchip technology occurred in 1971, when the American engineer Marcian Hoff combined the basic elements of a computer on one tiny silicon chip, which he called a microprocessor. This microprocessor, the Intel 4004, and the hundreds of variations that followed are the dedicated computers that operate thousands of modern products and form the heart of almost every general-purpose electronic computer.
By the mid-1970s, microchips and microprocessors had reduced the cost of the thousands of electronic components required in a computer. The first affordable desktop computer designed specifically for personal use, the Altair 8800, was first sold in 1974. In 1977 Tandy Corporation became the first major electronics firm to produce a personal computer. Soon afterward, Apple Computer, founded by Stephen Wozniak and Steven Jobs, began producing computers. IBM introduced its Personal Computer, or PC, in 1981, and as a result of competition from the makers of clones the price of personal computers fell drastically. Just recently, Apple Computer allowed its computers to be cloned by competitors. Throughout this long evolution, business has grasped at the computer, hoping to use it to increase productivity and minimize costs. The computer has been put on assembly lines, controlling robots. In offices, computers have popped up everywhere, sending information and allowing numbers to be processed easily. Two key words that apply today are downsizing and productivity: companies hope to increase worker productivity, meaning less work is needed, which then allows for downsizing. The computer is supposed to be the magic wand that makes productivity shoot through the roof, but in some cases the computer has been a waste of time and money. Reliance Insurance is an example of computer technology falling flat on its face, wasting a great deal of money while producing little or no result. "Paper Free in 1983" was the slogan Reliance used, because it had just spent millions of dollars to put computers everywhere and network them. The employees had e-mail and other programs that were supposed to eliminate paper and increase productivity. The company chiefs sat back and waited for a boom in productivity that never arrived. Other examples of the disappointments of computers are not hard to find.
Citicorp bank lost $200 million developing a system in the 1980s that gave up-to-the-minute updates on oil prices. Knight-Ridder tried to develop a home shopping network on television and lost $50 million. Wang Laboratories almost went under when it put all of its resources toward developing imaging technology that no one wanted. Ben & Jerry's ice cream put in an e-mail system, and of its 200 employees fewer than 30% used it. Yet everything attempted then is very common today: on-line services provide stock and commodities quotes, QVC is a home shopping channel on cable television, almost every picture in a magazine has been retouched with imaging technology, and even JRHS has an e-mail system that seems to be valuable. Other corporations have seized computer technology and used it to reduce costs, but usually the human factor is lost. The McDonald's fast food chain is an example of a company that has embraced computers to raise productivity and lower operating costs. The McDonald's kitchen has become a computer-timed machine: "You don't have to know how to cook, you don't have to know how to think. There's a procedure for everything and you just follow the procedure." The workers have in essence become robots controlled by the computer to achieve maximum productivity. The computer knows the procedure and alerts the worker to each event in it; all the worker must do is execute what the beeper or buzzer means. With so little knowledge of the making of the food, workers have become disposable: "It takes a special kind of person to be able to move before he can think. We find people like that and use them until they quit." McDonald's managers work even more closely with the computers that control them. The computer generates a graph of expected business and tells the manager how many people to schedule and when; all the manager does is fill in the blanks with names.
McDonald's computers also keep close track of sales and expenditures: "The central office can check . . . how many Egg McMuffins were sold on Friday from 9 to 9:30 two weeks ago or two years ago, either in an entire store or at any particular register." The main thing computers do in a manual job is speed things up ("Thinking generally slows this operation down."), and for this reason computers have made manual jobs ones of extreme monotony and no creativity. White-collar jobs have remained virtually the same; computers have just helped to enhance creativity and attempted to raise productivity. E-mail, word processors, spreadsheets, and personal organization programs are widely used by white-collar workers. These programs help workers make impressive presentations, communicate, and keep track of everything so that each can get more done, meaning fewer workers are needed and costs drop. This has not happened: over the last 30 years white-collar productivity has remained the same, while blue-collar productivity has almost quadrupled. This is due mainly to the fact that white-collar workers are required to think and adapt to situations quickly, which computers at the moment are unable to do; they only follow code to give a planned response. The blue-collar job requires less knowledge and skill, and so is more easily replaceable by a computer. Computers, though, have not been a failure in business; they allow information to be shared very quickly. The home office is a product of computers: people can work from home instead of going into an office. This has not become very popular, due to the loss of contact between people. It is the human factor that helps make business run, the random thought that saves the day, something a computer is incapable of producing. Computers may help quicken business, but they will never replace people, only reduce their knowledge or creativity by automating the process.
Another form of computing attempts to eliminate people from the picture entirely. Expert systems are large mainframe computers loaded with the knowledge of an expert individual, which then make very complex decisions. An expert in a field is chosen and interviewed, sometimes for over a year, about his or her job and decision-making. All of this knowledge is refined and put into a computer. Another person then enters some statistics into the finished machine, and within minutes a large printout emerges with the answers. Expert systems are used mainly in large investing corporations, but some have been developed to help diagnose diseases. The hope is that one day a patient will lie down, a couple of sensors and probes will pass over the body, and a computer printout will name the illness and the drug to cure it. Expert systems have seen little use, mainly because of their high price and a lack of trust in them. Computers have also reached beyond business, into schools. Children sit in front of computers and are drilled or taught about subjects selected by the teacher. This method of teaching has come under fire: some people believe the computer should be a tool, not a teacher, while others ask why learn from an ordinary teacher when a computerized version of the best can teach. The technology of today could allow a teacher in another country to teach a class through video conferencing. The attempts to spread computer technology into the classroom have produced results and taught lessons about how computers should be applied. The Belridge school district in McKittrick, California, was one of the most technological school districts in America. Every student had two computers, one at school and one at home, loaded with many brand-new teaching programs. The high school had a low-powered television station that broadcast every day.
The classes were small and parent involvement was high. Even with all of these advantages, one-third of the first-grade class was below the national average on standardized tests after the first year. Parents were enraged that after all the money spent nothing had happened, that the technology hadn't made the children smarter, and so all the computers were gone the next year and traditional teaching was put back in place. Belridge is an extreme example of people expecting computers to do magic and make children learn faster and better, much as companies hoped computers would raise productivity. The children were left to learn from the computer, which they did, but nothing changed; things actually got worse. One parent realized that ". . . good teachers are the heart and soul of teaching," because computers can only present facts and explain them to a certain extent, whereas a good teacher can explain to the student in many ways. The US has about 2.7 million computers for its 100,000 schools, a ratio of about 1 computer for every 16 students. Experts say that "Computers work best when students are left with a goal to achieve. . ." and students are allowed to reach this goal with proper direction from a teacher. After many attempts in the 1980s to put computers into the classroom, a Presidential Plan was drawn up:

1. Give computers to teachers before students.
2. Move them out of the labs and into classrooms.
3. Provide at least one workstation for every two or three students.
4. Still use flashcards for practice.
5. Give teachers time to restructure around computers.
6. Expect to wait 5 to 6 years for change.

This plan was meant to guide the introduction of computers into the classroom and maximize their ability as a learning tool. The computer will enhance the future classroom, but it cannot be expected to produce results quickly. One thing the use of computers in the classroom will help with is the fear of computers and their ability to confuse people.
Early exposure to computers will help increase computer use in society years from now. The biggest network of connected computers is broadly referred to as the Internet, the information superhighway, or the electronic highway. The Internet was started by the Pentagon as a way for the military to exchange information between computers using modems. Over the years the Internet has evolved into a public resource containing limitless amounts of information. The main parts of the Internet are FTP (file transfer protocol), gopher, telnet, IRC (Internet relay chat), and the World Wide Web. FTP is used to move large files from one computer to another quickly. Gopher is much like the World Wide Web, but without the graphical interface. Telnet is a remote computer login; this is where most of the hacking occurs. IRC consists of chat boards where people meet and type their discussions, though it is growing to include pictures of the participants and 3-D landscapes. Besides IRC, these Internet applications are becoming obsolete in favor of the World Wide Web. The most popular of the Internet applications is the World Wide Web, or WWW. It is a very graphical interface that can be easily designed and is easy to navigate. The WWW contains information on everything and anything imaginable. Movies, sound bites, pictures, and other media are easily found on the WWW. It has also turned into a business venue: most large businesses have a "page" on the WWW. A "page" is a section of the WWW with its own particular address; usually a large business will have a server with many "pages" on it. A sample Internet address would be "http://www.sony.com/index.html". The "http" stands for hypertext transfer protocol, or how the information will be transferred. "www.sony.com" is the server name; it is usually a mainframe computer with a T-1 or T-3 leased telephone line.
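This anatomy of a Web address can be taken apart mechanically. A small sketch using Python's standard `urllib.parse` module (parsing only; no connection is made to the server):

```python
from urllib.parse import urlparse

# Break the sample address into the parts described above:
# scheme (how to transfer), netloc (which server), path (which page).
url = "http://www.sony.com/index.html"
parts = urlparse(url)

print(parts.scheme)   # "http" -- hypertext transfer protocol
print(parts.netloc)   # "www.sony.com" -- the server name
print(parts.path)     # "/index.html" -- the page on that server
```

A browser performs exactly this split before it ever touches the network: the scheme picks the protocol, the server name is looked up to find the machine, and the path tells that machine which page to return.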
The server is expensive not because of the computer but because of the telephone line: a T-1 line, which carries about 1.5 megabits of information per second, costs over $1,000 a month, while a T-3 line, carrying about 45 megabits per second, can cost over $10,000 a month. The "index.html" is the name of the page on the server, which could hold hundreds of pages. All of this information has made for a virtual society. Virtual malls, virtual gambling, virtual identities, and even virtual sex have sprung up all over the Internet, wanting your credit card number or your First Virtual account number. First Virtual is a banking system that allows money deposited at a local bank to be spent on the Internet. Much of the Internet has become a large mail-order catalog. With all of these numbers and accounts, questions come up about the security of a person's money and private life, and they aren't easily answered. Being safe is a new craze today: protection from hackers and other people who will steal personal secrets and then rob someone blind, or protection from pornography or white supremacists or a million other things on the Internet. The recent communications bill that passed is supposed to ban pornography on the Internet, but the effects aren't apparent. There are still many US "pages" with pornography that carry consent pages warning the user of the pornography ahead. Even if US citizens stopped posting pornography, other nations still can, and the newsgroups are also international. Programs such as SurfWatch and Net Nanny have become popular, blocking out pornographic sites. The main problem, or beauty, of the Internet is the lack of a controlling party: "It has no officers, it has no policy making board or other entity, it has no rules or regulations and is not selective in terms of providing services." This is a society run by the masses that amounts to pure anarchy; nothing can be controlled or stopped.
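Returning to the leased-line figures above: what those capacities mean in practice can be checked with back-of-the-envelope arithmetic, assuming the standard rates of roughly 1.544 megabits per second for a T-1 and 45 megabits per second for a T-3, and ignoring protocol overhead:

```python
# Rough transfer-time arithmetic for leased lines.
# Line rates are in bits per second; file sizes in bytes.
T1_BPS = 1_544_000        # T-1: ~1.544 megabits/second
T3_BPS = 45_000_000       # T-3: ~45 megabits/second

def transfer_seconds(file_bytes, line_bps):
    """Ideal (no-overhead) time to move a file over a line."""
    return file_bytes * 8 / line_bps          # 8 bits per byte

one_megabyte = 1_000_000
print(round(transfer_seconds(one_megabyte, T1_BPS), 1))  # 5.2 seconds
print(round(transfer_seconds(one_megabyte, T3_BPS), 2))  # 0.18 seconds
```

Even at these rates, a server sharing one line among many simultaneous visitors divides that capacity further, which is part of why the monthly line cost, not the computer, dominates the expense.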
The Internet is so vast that many things could be hidden and known to only a few, for a long time if not forever. The real problem with controlling the Internet is self-control and responsibility: don't go where, and don't look at what, you don't want to, and if that amounts to a boring time, then don't surf the net. When speaking of computers and the Internet, one person cannot go unmentioned: Bill Gates, the head of Microsoft. Microsoft has a basic monopoly on the computer world: it writes the operating system and then the applications that run on that system, and when everyone catches up, it changes the version. Bill Gates's company rose to dominance in the early 1980s with DOS, the Disk Operating System, which was only recently made obsolete by Windows 95. Bill Gates has now ventured into the Internet and is tangling with Netscape, the company with the Internet monopoly. Netscape gives its software away free to people who want the basic version, but a version with all the bells and whistles must be purchased. Microsoft is hard pressed to win the Internet battle, but will take a sizable chunk of Netscape's business. Bill Gates will likely keep leading the software industry, acquiring products and companies to corner each new market. Computers are one of the most important things society possesses today. The computer will be even more deeply embedded in people's lives as the technology progresses. Businesses will become heavily dependent as video conferencing and working from home become increasingly feasible, so businesses will break down from large buildings into teams that communicate electronically. Schools may be taught by the best teachers possible, and software may even replace teachers, though that is highly unlikely. The Internet will reach into lives, offering an escape from reality and an extremely vast information source.
Hopefully society will further embrace the computer as a tool, a tool that must be tended to and assisted, not left to do its work alone. Even so, computers will always be present, because the dreams of today are made with computers, planned on computers, and then assembled by computers; the only thing the computer can't do is dream, at least right now. f:\12000 essays\technology & computers (295)\Computers and the Disabled.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Computers and the Disabled The computer age has changed many things for many people, but for the disabled the computer has ultimately changed their entire lives. Not only has it made life exceedingly easier for all disabled age groups, it has also made them more employable in the work force. Previously unemployable people can now gain the self-esteem that comes from fully supporting themselves. Computers have given them the advantage of mobility where it had not previously existed. Disabled children now grow up knowing that they can one day be competent adults who won't have to rely on someone else for their every need. Windows 95 has made many interesting developments toward making life easier for the nearly blind and for the deaf, including turning on-screen text into synthesized speech or Braille, and adaptive hardware that transforms a computer's audible cues into a visual format. Computers have given the limited back their freedom to be an active part of the human race. According to the Americans with Disabilities Act, any office that has a staff of more than fifteen people now has to provide adaptive hardware and software on their computers, so that workers with disabilities can accomplish many tasks independently. Before this Act was passed the disabled were normally passed over for jobs because of their handicaps; now, however, employers can be assured that people with disabilities can work in the workplace just like people without disabilities. 
The self-esteem disabled individuals have gained from the opportunity to work and be self-supporting is immeasurable. Computerized wheelchairs have given disabled people a whole new perspective on life. They have given them the mobility to go just about anywhere they want to go, the ability to explore an unknown world, and the chance to progress intellectually as well as spiritually. Computerized vans allow many disabled people to drive, with onboard computerized lifts to place the disabled in the driver's seat. Movement-sensitive hardware, as well as computerized shifting devices, allows the disabled to control the van with very little physical movement. Children with disabilities now have access to many computerized devices that enable them to move freely in their homes as well as outside. The battery-operated bigfoot truck, much like the ones we buy for our own children to play on, has been adapted and computerized for children with special needs. These trucks have been designed so that even some of the most limited children can operate them with ease. With the newest technology these children can now go to public schools with their peers and have an active social life. They also are learning that there is a place in this fast-paced world for them, and are teaching the rest of us that with strength and the will to succeed, all things are possible. The Windows 95 help system was designed to help users with hearing, motor, and some visual disabilities, and it includes information on the built-in access features. The controls for these features are centralized in the Accessibility Options control panel. This specialized control panel lets the user activate and deactivate certain access features and customize timing and feedback for a limited individual. A feature for the disabled called StickyKeys helps a person who doesn't have much control over hand movement to use a computer's delete command, or any other command that normally uses both hands. 
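The one-key-at-a-time idea behind StickyKeys can be sketched in a few lines of code. This is a hypothetical illustration of a modifier latch, not Microsoft's actual implementation; the class and key names are invented for the example.

```python
# Hypothetical sketch of a StickyKeys-style modifier latch.
# Modifier keys are "latched" one at a time; the command completes
# when an ordinary key is finally pressed.

MODIFIERS = {"ctrl", "alt", "shift"}

class StickyKeys:
    def __init__(self):
        self.latched = set()

    def press(self, key):
        """Process one key press; return a completed command, or None."""
        if key in MODIFIERS:
            self.latched.add(key)   # modifier stays "held" with no hands on it
            return None
        chord = "+".join(sorted(self.latched) + [key])
        self.latched.clear()        # command complete; release the latch
        return chord

sk = StickyKeys()
sk.press("ctrl")            # latched, nothing emitted yet
sk.press("alt")             # latched
print(sk.press("delete"))   # -> alt+ctrl+delete
```

In the real feature, a timing threshold also discards keys that are only brushed accidentally; that detail is omitted from this sketch.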
StickyKeys allows a disabled person to hit one key at a time, so that they can enter a multi-key command without pressing several keys simultaneously; it also allows for mistakes by discarding any accidentally hit key that isn't held down for the set amount of time. To use a mouse, a person normally needs complete control of hand movement. MouseKeys assists the disabled by letting the arrow keys on the keyboard's numeric keypad move the mouse pointer around the screen. ToggleKeys is another feature that aids the disabled: it provides audio feedback for certain keystrokes, using high- and low-pitched beeps to tell the current status of the Caps Lock, Number Lock, and Scroll Lock keys. Windows 95 also offers several features for those with limited sight. It provides a high-contrast layout that can be scaled to multiple sizes for easy reading, and its ShowSounds feature lets you set a global flag that shows sounds in a visual format. In an age when computers seem to be used in just about every aspect of life, the disabled have found something that makes their lives more endurable. Considering the limitations that they have overcome in their everyday lives, the disabled should be commended for the strength and will that have let them overcome, at least somewhat, the difficulties the world provides. The computer age has brought them many changes, and they have adapted and excelled. With Windows 95 and programs like it, the computer world has been brought to almost everyone, even people born with limited abilities. f:\12000 essays\technology & computers (295)\Computers I dont like computers So why cant iI get a job .TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Computers, I don't like Computers. So why can't I get a job? Today most jobs use computers. Employees are probably going to use one on the job. A lot of people are being refused jobs because they don't have enough (if any) computer-related experience. 
We are moving into the technology age, where almost everything is going to be run by computers. Is this good? That doesn't matter, because people are trying to find the fastest way to get things done, and computers are the answer. One of my relatives is having trouble finding a job in a new city he just moved to. I feel sorry for him because he was not introduced to computers when he began his career. If only he had been born nearer the technology age, he might have been more receptive to computers and would therefore have more experience with them, thus having more of a chance of getting a high-paying job. However, computers are getting easier to operate as we speak. Bill Gates said that Microsoft's key goal with Windows 95 was to make the operating system easier for the average person to operate. My grandma is a key example: she was born way before there were any PCs or networked offices. She remembers the big punch-card monsters that she would have to insert cards into to give them instructions. But my point is that she was not exposed to computers as part of everyday life, and now she is, so to speak, really behind in the computing world. Computers back then were huge; they were usually stored in warehouses. The earlier ones used paper with holes in it to take instructions. Later, the pre-PCs used tape cartridges to store data. Then, in 1977, came one of the first real personal computers, when Apple came out with the Apple II. Four years later, IBM came out with their version of the personal computer and entered the PC market. Apple's biggest mistake was not adopting MS-DOS as their operating system, and they fell behind in the market due to software. The computer was software driven, as it is today; the computer is just a paperweight without the software to go along with it. Microsoft's success came from getting into the market early with software for the IBM personal computer. Apple was not doing this as well as IBM and Microsoft were. 
We are now in the information age, where information and computers are one. The information age is going to be responsible for most of the world's changes. In the future PCs are going to be connected as they are now, but at greater speeds, making it possible to video teleconference with your friends and co-workers. TV will change too: it will be more interactive and direct, and you will be able to watch shows when you want to watch them. So the future of computing has a long way to go before it will slow down. I encourage everyone to become familiar with computers now rather than later, when they most need them. f:\12000 essays\technology & computers (295)\Computers in modern society.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Looking around at daily life, I noticed a pattern of computer-oriented devices that make life easier and allow us to be lazier. These devices are in most daily activities, ranging from waking up to an alarm clock that is computerized to watching the news before going to bed on a computerized television. All of these computerized facets of our society help to increase our daily productivity and help us to do whatever it is we need to accomplish in the day. The computer age is upon us, and it will continue to grow in influence until society revolves around it daily. In personal computers, the industry has begun to create faster machines that can store much more information. For speed, the internal microprocessor has been tweaked to perform at high rates; one such microprocessor is the Intel Pentium chip, among the fastest commercial microprocessors on the market. In addition to internal speed, and to allow faster hook-up to the Internet, faster telephone lines have been added, for an extra charge, to transfer data about four times faster than conventional phone lines (about 28,800 bits per second has been quadrupled to about 128,000 bits per second). 
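The quadrupled line speed mentioned above is easy to put into perspective with a little arithmetic. A sketch (the one-megabyte file size is just an example chosen for illustration):

```python
# How long a file takes to transfer at the line speeds discussed above.
# Rates are in bits per second; sizes are in bytes (8 bits per byte).

def transfer_seconds(size_bytes, bits_per_second):
    return size_bytes * 8 / bits_per_second

ONE_MEGABYTE = 1_000_000

print(round(transfer_seconds(ONE_MEGABYTE, 28_800)))   # conventional modem -> 278
print(round(transfer_seconds(ONE_MEGABYTE, 128_000)))  # faster digital line -> 62
```

A one-megabyte file that takes well over four minutes on a conventional modem moves in about a minute on the faster line.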
As speed increases, memory and storage space are needed to hold the extra information. EDO RAM is a newer, faster memory module that helps transfer RAM data faster than normal RAM. For long-term storage of large amounts of data, hard drives have been under constant performance upgrades, and it is not uncommon to find hard drives of about 8-9 gigabytes on the market. Along with raw technology, an ease-of-use factor has been instilled in modern-day PCs. The most notable ease-of-use enhancement is the GUI (graphical user interface), which allows the user to see a choice instead of reading about the choice. This is accomplished by using pictures and windows to simplify the choices; Windows 95 and the Macintosh OS both use a GUI to simplify use. Another change in technology has all but pushed the manufacturing of typewriters into extinction. Offices are turning more and more to computers instead of typewriters, because computers integrate many office tasks in one machine, most notably through word processors. With the use of word processors on a computer comes the use of spell check, which is offered on only a few typewriters. The fastest-growing part of the computer-oriented world is the Internet. It allows users to send electronic mail (e-mail) faster and more conveniently than conventional, or "snail," mail. In addition to the text sent, the user may opt to send a program or picture attached to the letter. Beyond electronic mail, the Internet is also used to give information on almost any topic. It is a tool now common in college research because it offers millions of sources, and there is almost no limit to finding information. It is not only a tool for research, but also a tool for business. Businesses use it to advertise and to try to sell items on-line. These companies set up their own web sites and place the sites' addresses in their television and radio ads. 
Business use is not limited to advertising and selling to consumers; companies also sell to and buy from other companies faster than conventional methods allow. Technology is all around us, and there are many practical applications of computer technology. For example, the government uses the information superhighway to verify drivers' licenses and Social Security numbers. The Internet is used by congressional committees to conduct research related to their current problems. Technology is used in automobiles to calculate the right gas-to-air mixture in fuel-injected cars. In auto garages, technology is used to align the wheels and to find electrical system problems. Another example is radio and television, two of the most important things in many lives. These devices would not be able to do what they do without the help of mini-computers that decipher the incoming signal. On digitized radios and televisions, there are computers that control the volume level. Banks are even installing technology into their operations by using Automated Teller Machines (ATMs). These machines take the place of human tellers and process transactions faster, with built-in logic control to prevent overwithdrawal. Even businesses use technology during non-business hours by having automated telephones that continue to do business long after the last person has gone home. They accomplish this by using prerecorded messages and logic control that lets callers get information if they have a touch-tone phone. This increases business productivity with minimal maintenance costs, so by using computers, businesses have educated their consumers without having to speak to them manually. There are many possibilities for future uses of computers to simplify daily life and enhance the life experience. The first, which is already under development, is highway navigation, which lets cars drive themselves and should make the roads safer. To enhance personal life, video phones will let you see who you are talking to. 
This technology will depend on how we develop data transfer and deciphering capabilities that are too expensive to use now. We will be able to use almost every major household luxury we have now on a PC: we could watch TV, listen to the radio, and talk on the telephone. These technologies are under development and are available in limited form, except the radio, which can be used in final form now. Society is changing rapidly. This change is attributable to the ease of use of computerized manufactured goods. These luxuries, which will become standard living tools, are creating a society in which computers will rule. We will continue to develop technology until life is as automated as it can get. Almost every daily task will be computerized, and computers will dominate the world. f:\12000 essays\technology & computers (295)\Computers Related to Turf Grass Industries.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Computer Science Term paper Turfgrass Science Dennis Zedrick October 23, 1996 The field of turfgrass science and golf course management has become very sophisticated in just the few short years that I have been involved. Much of the equipment has gone high-tech, with electric motors and more computerized technology. Many golf course superintendents are now "online via the web." If there is a question concerning a new disease or fertilizer, one can log on to the Texas A&M home page and hopefully find a solution to the problem. The technology in the computer field has also advanced irrigation technology in the agriculture field. Irrigation systems can now be turned on with the touch of a button through an IBM or Macintosh personal computer. New computer technology will continue to make leaps and bounds for the turfgrass industry. Ransome Industries, maker of fine turfgrass mowing equipment, has come out with the first electric mowing machine. 
I myself am not in favor of this, and I would guess no one in the petroleum industry is either, for that matter. There has been a greater demand for environmental concern along the nation's coastlines and nationwide, and most of the world's great golf courses are located along the coasts. Ransome was banking on an electric mowing machine fitting that need, but it has been slow to catch on as of late. Its first benefit is an almost silent, no-noise machine (Beard 302). Many country club members would become outraged when the superintendents would send out the greensmowers daily at 6:00 A.M.; the diesel- and gasoline-powered engines are noisy and would wake up many members who live along the golf course. The second benefit is no cost for gasoline or oil, and therefore no chance of a petroleum leak or spill. Their downfall lies in their initial cost: $15,000 for a gasoline triplex mower versus $20,000 for an electric-powered mower. Another real downfall is that they can only mow nine holes before they have to be charged for ten hours, rendering them useless for the rest of the day. Hopefully technology can produce an environmentally friendly machine while not putting the oil industry in a bind, and also keep the government's hands out of the cookie jar with new environmental taxes! The Internet has become a very important tool to the people in the turfgrass industry. At any given time a golf course superintendent can log onto various companies' home pages to learn something about their products (Beard 101). If one day I am searching for a new fairway mower, I can bypass the phone calls and written estimates and go straight to the information. Toro, Ransome, Jacobsen, and even John Deere all have home pages. You can inquire about a certain mower model, engine size, or anything else you need to know. A page will list a price and even the shipping and handling and the salesman's commission. 
Perhaps the best part about the Internet is all the turfgrass-related information that is at your fingertips (Beard 120). One can access the three dominant turfgrass schools in just seconds (Beard 122): Texas A&M, Mississippi State, and Oklahoma State. If it is the middle of the summer and there is a big tournament coming up, they can be of great help. If your putting greens start to die in spots in the heat of the summer, one could log on to the Texas A&M home page and root around for information on what type of disease might be causing it (Beard 420). They give identifying characteristics for each disease that help in a quick diagnosis of the problem. They even offer helpful tips on what chemicals will best control the problem, and how much to spray. If that's not enough, they give tips on employee management and possible job opportunities with the college. How can the Internet and computer technology possibly make my future job any easier, I might ask? Well, that is an easy question to answer. Toro, Rainbird, and Flowtronics PSI have found a way to make water management an easy task. Automatic water irrigation systems have been around since the early seventies. First they were run off a mechanical pin-and-timer system for home lawn use. This was a very reliable system, but it lacked flexibility (Wikshire 95). Next came the automatic timer systems, which run off an electronic timer from a 110-volt wall outlet. These are still in use today, and they are a very good system (Wikshire 112). Last but not least has come the water management system run from your personal Macintosh or IBM-compatible computer. The personal computer actually works as the brain of the irrigation system (Wikshire 200). You download the program onto the computer, and it does all the work for you. 
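What "doing all the work" might look like can be sketched as simple threshold logic. This is a toy model with invented threshold values and function names, not the actual Rainbird or Flowtronics software:

```python
# Toy sketch of sensor-driven irrigation control: shut off when soaked
# (e.g. heavy rain), turn on when extremely dry, otherwise leave the
# current program running. Thresholds are invented for illustration.

DRY_THRESHOLD = 20   # percent soil moisture: at or below this, start watering
WET_THRESHOLD = 80   # percent soil moisture: at or above this, shut off

def sprinklers_on(moisture_percent, currently_on):
    """Decide whether the sprinklers should run for the next cycle."""
    if moisture_percent >= WET_THRESHOLD:
        return False         # it has rained too much: shut the system off
    if moisture_percent <= DRY_THRESHOLD:
        return True          # extremely dry: come on
    return currently_on      # in between: no change to the program

print(sprinklers_on(10, False))  # -> True
print(sprinklers_on(90, True))   # -> False
print(sprinklers_on(50, True))   # -> True
```

Leaving the middle range unchanged gives the controller a dead band, so the sprinklers don't flip on and off every time the reading wavers near a threshold.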
It has a water sensor located outside that tells the system to shut off if it has rained too much, or to come on if it is getting extremely dry on a hot summer day. It can also measure the amount of nitrogen, phosphorus, and potassium in the soil, if necessary, and it will test the water and tell you the amount of salt or nitrates in it. Once a watering program is started, it is also easily changed to another program if so desired (Wikshire 202). This has benefited the turfgrass industry in many ways. It has saved superintendents from having to come in and shut the irrigation off in the middle of the night if it starts raining hard. Most importantly, it has saved money in the labor part of the budget: it keeps hourly employees occupied with other tasks, rather than having to turn on individual sprinkler heads every day. The most popular programs by far are the Rainbird Vari-Time V and VI (Wikshire 250). These two programs are leaps and bounds above the rest. Having knowledge of computers and computer-related programs will be very beneficial to me in the turfgrass industry. The technology will benefit me and others, from new high-tech electric mowing machines to non-hydraulic mowers. The Internet could be the most useful tool for me in my job. It will give me useful knowledge of what is going on in the world, and it could help save me from a costly mistake in disease control that could cost me my job. The computer industry has also made great accomplishments when it comes to water conservation management. These programs can be downloaded onto your personal computer; they are great labor savers and, most of all, effective time management tools. I hope that the technology will keep advancing and make my future job as a golf course superintendent much easier. Works Cited Beard, James. Turf Management for Golf Courses. New York: Macmillan Publishing Company, 1992. Beard, James. The Science of Agronomy. 
New York: Macmillan Publishing Company, 1994. Wikshire, Don, and Charles Cason. The Principles and Technology of Irrigation and Drainage. Englewood Cliffs: Prentice-Hall Inc., 1995. f:\12000 essays\technology & computers (295)\ComputerScience.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Computer Science Even before the first computer was conceptualized, data had already been stored on hard copy media and used with machines. As early as 1801, the punched card was used as a control device for mechanical looms. One and one-half centuries later, IBM joined punched cards to computers, encoding binary information as patterns of small rectangular holes. Today, punched cards are rarely used with computers; instead, they survive in a handful of train tickets and election ballots. 
Although some may find it surprising, a computer printout is another type of hard copy medium. Pictures, barcodes, and term papers are modern examples of data storage that can later be retrieved using optical technology. Although it consumes physical space and requires proper care, a non-acidic paper printout can hold information for centuries. If long-term storage is not of prime concern, magnetic media can retain tremendous amounts of data and consume less space than a single piece of paper. The magnetic technology used for computer data storage is the same technology used in the various forms of magnetic tape, from audiocassette to videocassette recorders. One of the first computer storage devices was the magnetic tape drive. Magnetic tape is a sequential data storage medium: to read data, a tape drive must wind through the spool of tape to the exact location of the desired information, and to write, it encodes data sequentially along the tape. Because tape drives cannot randomly access or write data like disk drives, and are thus much slower, they have been replaced as the primary storage device by the hard drive. The hard drive is composed of thin, rigid magnetic platters stacked on top of one another like records in a jukebox, and the heads that read and write data to the spinning platters resemble the arm of a record player. Floppy disks are another common magnetic storage medium. They offer relatively small storage capacity compared to hard drives, but unlike hard drives, they are portable. Floppy disks are constructed of a flexible disk covered by a thin layer of iron oxide that stores data in the form of magnetic dots. A plastic casing protects the disk: soft for the 5¼-inch disk, and hard for the 3½-inch disk. Magnetic storage media, for all their advantages, only have a life expectancy of twenty years. Data can also be stored on electronic media, such as memory chips. 
Every modern personal computer utilizes electronic circuits to hold data and instructions. These devices are categorized as RAM (random access memory) or ROM (read-only memory), and are compact, reliable, and efficient. RAM is volatile, and is primarily used for the temporary storage of programs that are running. ROM is non-volatile, and usually holds the basic instruction sets a computer needs to operate. Electronic media are susceptible to static electricity damage and have a limited life expectancy, but in the modern personal computer, electronic hardware usually becomes obsolete before it fails. Optical storage media, on the other hand, will last indefinitely. Optical storage is an increasingly popular method of storing data. Optical disk drives use lasers to read and write to their media. When writing to an optical disk, a laser creates pits on its surface to represent data; areas not burned into pits by the laser are called lands. The laser reads back the data on the optical disk by scanning for pits and lands. There are three primary optical disk media available for storage: CD-ROM (compact disc read-only memory), WORM (write once, read many), and rewritable optical disks. The CD-ROM is, by far, the most popular form of optical disk storage; however, CD-ROMs are read-only. At the factory, lasers are used to create a master CD-ROM, and a mold is made from the master and used to create copies. WORM drives are used almost exclusively for archival storage where it is important that the data cannot be changed or erased after it is written, for example, financial record storage. Rewritable optical disks are typically used for data backup and for archiving massive amounts of data, such as image databases. Although there are many manufacturers of the data storage devices used in the modern personal computer, each device fits into one of four technological classes according to the material and methods it uses to record information. 
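The sequential-versus-random-access difference between tape and disk described earlier can be illustrated with a toy model. The record layout and step counting below are invented purely for the comparison, not a model of real hardware:

```python
# Toy model of the access-pattern difference between tape and disk.
# A tape must wind past every record before the target; a disk head
# can seek straight to any record.

records = [f"record-{i}" for i in range(1000)]

def tape_read(target):
    """Sequential access: count a step for every record wound past."""
    steps = 0
    for i in range(target + 1):
        steps += 1
    return records[target], steps

def disk_read(target):
    """Random access: one seek, regardless of position."""
    return records[target], 1

print(tape_read(750))  # -> ('record-750', 751)
print(disk_read(750))  # -> ('record-750', 1)
```

The further down the tape the data sits, the more of it must be wound past, which is why tape works well for backups read front-to-back but poorly as a primary drive.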
Hardcopy media existed before the invention of the computer, and magnetic media are predominantly used today. Electronic media are used by every computer system to store instructions or temporarily hold data. Finally, optical storage media utilize lasers to read and write information to a disk that lasts indefinitely. Each type of medium is suitable for certain functions that computer users require. Although they use differing technologies, they all have equal importance in the modern personal computer system. f:\12000 essays\technology & computers (295)\ComputerVirus.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ It Is Contagious "Traces of the Stealth_c Virus have been found in memory. Reboot to a clean system disk before continuing with this installation..." This was the message staring back at me from one of the computer monitors at my office. Questions raced through my mind. "Stealth_c?" "What's a system disk?" "How am I supposed to install anti-virus software if the computer system already has a virus?" As a discouraging feeling of helplessness came over me, I thought of all the people who had loaded something from disk on this box or who had used this box to access the Internet. Because there was no virus protection in the first place, it was going to be very difficult to determine how many floppy disks and hard drives had been infected. I wished I had learned about computer viruses a long time ago. What is a computer virus, anyway? Is it a computer with a cold? A computer "virus" is called a virus because of three distinct similarities to a biological virus: (1) it must have the ability to make copies of, or replicate, itself; (2) it must have a "host," or functional program, to which it can attach; and (3) it must do some kind of harm to the computer system, or at least cause some kind of unexpected or unwanted behavior. 
Sometimes computer viruses just eat up memory or display annoying messages, but the more dangerous ones can destroy data, give false information, or completely freeze up a computer. The Stealth_c virus is a boot sector virus, meaning that it resides in the boot sectors of a computer disk and loads into memory with the normal boot-up programs. The "stealth" in the name comes from this virus's capability to hide from anti-virus software. Virtually any media that can carry computer data can carry a virus. Computer viruses are usually spread by data diskettes, but can be downloaded from the Internet, from private bulletin boards, or over a local area network. This makes it extremely easy for a virus to spread once it has infected a system. The aforementioned Stealth_c virus was transported by the least likely avenue: it was packaged with commercial software. This is an extremely rare occurrence, as most software companies go to great lengths to provide "clean" software. There is a huge commercial interest in keeping computers virus-free. Companies stand to lose literally thousands of dollars if they lose computer data to a virus, and an immense amount of time can be lost from more productive endeavors if someone has to check or clean each computer and floppy diskette, because, no matter what, a virus will continue to replicate itself until it uses every bit of memory available. To service this market, companies sell anti-virus software, which scans programs, searching for viruses. If one is found, a user can "kill" it by cleaning the file, delete the file itself, move the file to a disk, or ignore it. Ignoring a possible virus is an option because some of the newer software utilizes heuristic algorithms to detect possible viruses; this method of detection is highly effective, but because of the sensitivity of the programs, false hits can occur. It is also very important to keep your anti-virus software current. 
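The signature scanning that anti-virus software performs can be sketched naively. The byte patterns below are invented placeholders, not real virus signatures, and real scanners add heuristics, cleaning, memory scanning, and much more:

```python
# Naive signature-based scanner: report any known byte pattern
# found inside a file's contents. Signatures here are invented.

SIGNATURES = {
    "EXAMPLE_A": b"\xde\xad\xbe\xef",
    "EXAMPLE_B": b"\xca\xfe\xba\xbe",
}

def scan(data):
    """Return the names of all known signatures present in the data."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

clean = b"just an ordinary program image"
infected = b"header" + b"\xde\xad\xbe\xef" + b"payload"

print(scan(clean))     # -> []
print(scan(infected))  # -> ['EXAMPLE_A']
```

This also shows why keeping the software current matters: a scanner built this way can only find patterns already in its table, so each new virus needs a new entry.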
By some estimates, forty to one hundred new virus programs are written every week by less-than-ethical programmers. Most software companies put out new "vaccines" every month. It is an ongoing battle: the bad guys write a new virus, or even a new "species" of virus; the good guys get a copy from some poor soul whose computer has been infected, and they write a vaccine. Some of the more paranoid, or perhaps astute, have theorized that the companies writing anti-virus software and the programmers writing viruses are one and the same. However, the author of a computer virus means nothing to one whose machine has lost data or has crashed due to infection. Detecting and deleting the virus becomes the immediate action needed. This is impossible without anti-virus software, and would be much simpler if the software were already installed on the system. So, keep your computers "vaccinated," because it is contagious. f:\12000 essays\technology & computers (295)\Conduit Technology versus Communication.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ~sKePtic~ : hi green, wanna drink ~sKePtic~ : Hi visitor, want a drink? Ann_Organa : skep, what are u the welcome party Ann_Organa : Skep, are you the welcoming party now? (Laughs out loud) ~sKePtic~ : ...maybe...trying to entertain in more ways then one J ~sKePtic~ : (Grins) Maybe, I try to entertain in more ways than one (smiles) neXus : You 2 married, cuz you sound like it...IMHO neXus : In my humble opinion, you sound like a married couple! Ann_Organa : only in your dreams, skep...he-he Ann_Organa : Only in your dreams, Skep (laughs) ~sKePtic~ : L ~sKePtic~ : (Frowns, in disappointment) neXus : BRB, email... neXus : Be right back, I have email to check. neXus dropped. neXus has left the chat room ~sKePtic~ : Boring now.. : - / ~sKePtic~ : It seems boring now (has a wry facial expression) Ann_Organa : one more drink for the night.. Ann_Organa : One more drink for the night, please. 
~sKePtic~ : BEER.WAV sent <> 
~sKePtic~ : (a sound of beer being poured is heard) Smiles happily. 
Ann_Organa : THX 
Ann_Organa : Thank you! 
* * * Old English? Nope. Shakespearean dialect? Not exactly. Foreign language? Not really. Ebonics? Could be. English? Somewhat. What are the true meanings behind the symbols above? To many it is a form of communication, where symbols reflect expressions and spelling does not count. It is a dialect that continues to grow in the technological corners of society, developed and controlled by a machine we so commonly call "the computer." Has the way we communicate been degraded to the acronyms of the Internet? Or is a new language gradually developing, one held true by a younger generation living by the standards of a machine? Language has usually kept pace with the times, expanding to accommodate new terms and slang expressions. Yet time now flows at an accelerated rate, as our own dialects are challenged by technology's swift momentum. In the past, technology kept pace with the way we communicate. Now we find our very own dialect, and ourselves, bending to the rules of technology. In order to comprehend how technology creates variation within language, we must first understand how the languages spoken in the United States have progressively become linguistically diverse. All languages have both dialectal variations and registral variations. These variations, or dialects, can differ in lexicon, phonology, and/or syntax from the Standard Language that we often think of as the "Correct Language," although they are not necessarily less proper than, say, "Standard English." Whether a dialect is appropriate depends on where, by whom, and in what situation it is used. Before computers, only factors of location, ethnicity, education, and age heavily influenced language. Most people are familiar with regional dialects, such as Boston, Brooklyn, or Southern. 
These types of variation usually occur because of immigration and settlement patterns. People with the same social class factors, education, and occupation tend to seek out others like themselves. While occupations often produce their own jargons, a person's occupation will also determine what style of speech is used. A lawyer and a laborer would not be likely to use the same dialect on the job. Likewise, a person with little education is not likely to use the same style of speech as a college professor. Those working together customarily develop a direct, shared dialect in which they can communicate and ultimately understand each other. Ethnicity is also a contributing factor that produces language variation, particularly among immigrants. The rather widespread survival of dialects such as Ebonics and Chicano English seems to stem from the social isolation of the speakers (discrimination, segregation), which tends to make the variation more obvious. Furthermore, age factors into language variation in two ways. First, there are generational differences. As the younger members of a speech community adopt new variants, the older members may not be affected, opting instead to use their traditional dialects. The aged populace will communicate with the words they learned decades ago, while the younger generation communicates on the street using slang and new phrases. The second way age produces change is over time, corresponding with the various stages of an individual's life. This is particularly evident in teen slang. While this kind of slang does not generally hold over from one generation to the next, the teens who used it generally do not carry it into middle age, either. Technology resembles our teen generation in that both continue to grow. The computer has developed into a communication conduit, casually learning to transmit and acquire human conceptions through the services and programs it offers. 
Feelings of humor, excitement, and sadness have been captured in a plastic keyboard containing a jumbled alphabet. Gradually, as technology expands, more terms are thrust upon the open public. The advent of IRC (Internet Relay Chat), the Internet, and chat rooms brings forth a new way for people to communicate. Without warning, laughter can be generated by any of a seemingly infinite number of acronym combinations. Today, there are over 513 different characters describing human feelings, emotions, and ideas in the computer age. As seen in the example chat session, feelings of sorrow, laughter, and happiness have been captured by the symbols LOL, BRB, IMHO, and THX. However, in the absence of any visual or aural communication, these electronic characters slowly strip away the true emotions usually conveyed by body movement and gesture. Many emotions can be interpreted incorrectly, and one can gain a totally different image of what is implied. Take the feelings of laughter and love. Both are compressed into the same three-letter shorthand: LOL. Yet the two are opposite expressions. One is defined as Lots of Laughter, the flip side as Lots of Love. How can one interpret the difference? Only with knowledge of the situation or setting can the true intention be discerned. Interpretations like this are difficult in real life for some, while others find a knack for them in chat sessions. Far from being the stereotypical computer user who shuns all social contact and withdraws to a room to play with the computer, IRC users enjoy a range of social interaction on a level unthinkable in the past. Some users stay on the computer an average of five hours a night - not to play video games, but to talk with other users all over the world. Through chatting (or real-time conferencing), friends meet over the E-mail lines. Some members fall in love without ever meeting face to face. 
Some E-mail subscribers have even gone through "virtual marriages" while maintaining a traditional family life on the other side of the computer monitor. Because E-mail systems are text based, communication between people who do not know or cannot see each other can sometimes be difficult. Fortunately, a system of keyboard characters has been developed to give added meaning to messages and clear up misunderstandings. Named "emoticons" or "smileys," these characters are used to convey pleasure, sadness, or sarcasm. Message writers use hundreds of types of smileys. Letters in place of long phrases, called shorthands, also speed up communication. Some of the most widely used smiley and shorthand symbols are included in the conversation on the first page. With the introduction of these characters comes an ever-growing threat to our language. Even now, we can slowly see the rippling effects. The computer has altered our language, and now that language rules the network lines of communication, held fast by the very computers that created it. "What is my name now? The 'Net has stripped away our identities as human beings..." A typical chat session is filled with members greeting each other, making trivial conversation, and flirting. Members send messages that include aliases they have created, bulletin board jargon, typographical errors, smileys, and shorthands. For example, take the chat session above, set within "The StarSide Bar and Grill." Conversation is light, as a friendly bartender always wants to give you a free, VR alcoholic drink. (Perhaps these E-mail bars are the answer to the drinking-and-driving problem.) In the absence of aural or visual communication, smileys are necessary to convey feelings and emotions. When visually based VR systems become common, such a symbol set may no longer be needed. Yet chat sessions have been slowly threatening our identity as people and questioning our human morality. 
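The shorthand lookup the essays describe, including the LOL ambiguity, can be sketched as a small table. The table below is a tiny hypothetical sample, not a complete glossary; the point is that a shorthand can map to more than one meaning, so context must decide.

```python
# Sketch of chat-shorthand expansion. The table is a small hypothetical
# sample; "LOL" deliberately carries both meanings discussed above.

SHORTHANDS = {
    "BRB": ["Be right back"],
    "IMHO": ["In my humble opinion"],
    "THX": ["Thanks"],
    "LOL": ["Lots of laughter", "Lots of love"],  # ambiguous without context
}

def expand(token: str) -> list[str]:
    """Return every known expansion; more than one means the reader
    must infer the intended meaning from the situation."""
    return SHORTHANDS.get(token.upper(), [token])

print(expand("brb"))  # ['Be right back']
print(expand("LOL"))  # two candidate meanings: context must decide
```

Unknown tokens pass through unchanged, mirroring how a reader simply takes an unfamiliar word at face value.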
Through personal interviews, I gained deeper insight into this new form of interaction. One person told me, "What's a name now? My true identity has been ripped away from me, and I feel helpless. Has my enriched heritage name been degraded to the nicknames of the web, like "Q-BarfMan" and "-CornHulio-"?" Another told me, "To chat is like practice - for the real world. Here you can talk to girls without getting nervous or embarrassed, or start a fight without ever getting a scratch." The decency of chat rooms is still being disputed and argued today, and, like technology, the dispute will never rest. However, text-based systems will always be an important form of E-mail communication. For example, with an alias, no one can tell whether the message writer on the other end of the line is male, female, Asian, Anglo, young, old, wheelchair-bound, or deaf. Consequently, people in an E-mail world are judged not by their physical attributes but by the content of their messages. "...because of e-mail and chat, people haven't picked up a pencil in years..." (Heslop 401) In his essay "Return to Sender," Brent Heslop argues that E-mail is more advantageous to the individual than one may think. "With the Internet, E-mail has become the ultimate convenience," (402) Heslop writes. His arguments are presented through the eyes of businessmen: the speed and relative cheapness of E-mail offer more opportunity than lifting a pen. "E-mail programs enable users to send attachments of other documents, and transmit them in a fraction of the time it would take a courier service or U.S. postal system to deliver them." (Heslop 401) Because of the speed, reliability, and efficiency of E-mail, the postal services have lost thousands of dollars, their delivery system has been ridiculed, and letters have been derisively retitled snail mail. However, downsides do occur. What happens when network servers go down and fail to deliver documents? 
Have we come to depend more on keyboards to deliver information than on the pen in hand? And what is to become of the beauty of poetry? Can the same rhetorical sense of Shakespeare and Yeats be understood in the acronyms and abbreviations of Internet lingo? Again, only the advancement of a single machine will determine how we deliver information to each other in the near future. In his science fiction novels, William Gibson uses the word cyberspace to describe the ethereal world of the electronic highway, where unusual and unlimited communication links are available. Space on the electronic highway comprises not asphalt or concrete, but electricity and light. Writer John Perry Barlow describes cyberspace as having: ...a lot in common with the 19th Century West. It is vast, unmapped, culturally, and legally ambiguous, verbally terse (unless you happen to be a court stenographer), hard to get around in, and up for grabs...In this silent world, all conversation is typed. To enter it, one forsakes both body and place and becomes a thing of words alone....It is, of course, a perfect breeding ground for both outlaws and new ideas. (South 63) The Internet has become our brave new world. It has succeeded in fabricating a new environment of language, emotion, and expression. Every day it threatens the existence of our language and challenges our values as human beings. Our present world as we know it is being transformed, slowly sucked into the virtual worlds originated by computers and their creators. The Internet has become a secondary world in which we must talk to others without truly seeing them, and speak a peculiar dialect consisting of smileys, acronyms, and shorthand. Ironically, we have not done anything to change this - until now. "The Internet has changed my life - I no longer have one..." It is estimated that an alteration in our language occurs every minute. The Internet is the new frontier that pushes additional words into existence. 
However, many want to stop this rapid succession of wordplay, including the government. Those against it want to transform the Internet into a grammatically correct world of communication. Presently, the Internet is under scrupulous observation for indecent material by not only the government but the nation's communities as well. For me this theme is a complex one, and a tad ironic. Agreed, there are many X-rated sites on the Internet that are easily accessible to children, but wouldn't government control of a system based on information be an infringement of the First Amendment? It seems we have placed ourselves between a rock and a hard place. My explanation, part argument and part solution, narrows down to the virtue of the user. The morality and values of the individual at the keyboard should ultimately decide what that individual views. For whose right is it to control what you want to view on the Internet? Is it mine? Yours? Someone else's? Or Uncle Sam's? Even today I was placed on the hot seat when I entered an argument with someone passionately fighting for the Decency Act (the act that would give the government the right to abolish whatever it deems indecent on the web). First we disputed the true definitions of indecency and decency, and how the government and CDA officials may hold a different meaning of what is decent and what is not. Surely, an official's idea of what is decent differs considerably from a 16-year-old's. "The Internet may have some good aspects, but as a whole it is terrible. It is the number one method that child molesters use to find their victims, and one of the leading causes in the kidnapping of children exposed on the web." I shot back with, "How do you know that this is true? Do you have facts? Do you surf the 'Net daily looking for child-molesting sites, and if so, how decent is that? Do you have any proof, and if so, where? 
If you got it off the 'Net, how decent is it? And if you say the majority of the Internet is bad, how do I know your data is reliable?" From her, all I got was silence. The next thing I knew, I was talking to an empty line as she slammed down the phone in disgust and anger. It will be an interesting battle, and an issue that will be talked about long into the future. Ironically, those fighting for the Decency Act have failed to see the obvious. Amid all the fighting over pictures (such as pornography), 'Net frauds, and sites that aim to scam, we have neglected the changing language and its diversity found online. We have heard the expression "A picture is worth a thousand words," but can that apply to the pictures placed on the Internet, especially the unacceptable ones? Why are we fighting only the images? Should we not also fight for our language? If we want to stop the spread of variance, and stop the acronyms and smileys, shouldn't we fight for a language decency act first? Why have we become so blind? The injustices lie not just in the images but in the text as well. In order to stop the change, the Internet must be reviewed as a whole, not piece by piece. I am not implying that English professors should swamp Congress in order to change the written laws of the Internet. If we want to stop the torment that language receives from technology, we should first make the text decent enough to understand. Then we could make decent sense in communicating to each other what is appropriate in online images. Only then will language progress with technology - as a joint effort, not a cat-and-mouse game. Technology will always push communication to the edge. For some, the edge is where they have to be. We find ourselves bending to the rules of communication in order to stay in constant touch with friends and loved ones and to follow current news. 
For us to keep ahead of time itself, we have to play by the rules of technology, regardless of age, location, or ethnic background. But how can we eliminate the problems technology brings to communication? The only answer is to wait and let the patience of time go to work. Eventually we may lose this dialect as the years go by and younger generations continue to introduce new terms. If history repeats itself, variations in our language may create new dialects that will replace our current ones, for better or worse. Discussion Page I must say, I really did enjoy this paper. Not only were the writing and research fun, they were exciting. How many times can you look back upon history, only to find that the answer to the future of communication may actually lie in the past? What I tried to show was a two-sided argument on this theme, especially as the Internet faces constant gridlock now that official hearings have begun on its decency as an information service provider. Since I, like others, am heavily affected by computers and the Internet, I wanted to share some of my personal thoughts and opinions with the reader; I only hope that they give the paper more flavor and do not distract from the main issue. Even though there was excitement, this was also one of the most perplexing papers I have worked on. I could easily have rambled on and on and made this paper around 20 pages! So many tangents opened up along the way! But I wanted to show you (the reader) the hard facts, and the strength of the evidence used to back them up. What became so perplexing was the way language intermingles with technology. It was as if I had one huge jigsaw puzzle spread out on a table, and slowly I had to piece it together to make the whole picture become visible and clear. I must say (and question), on a personal level, that technology does have its benefits, yet has it done more evil to society than good? 
If we zip to the past, we can see that it was technology that created the gun, the atomic bomb, and the plane that carried it. And today we build even deadlier machines using even more sophisticated technology. These new weapons of destruction, created at the hands of technology, now have a better kill ratio and greater efficiency at destroying their prey. Ironically, technology can affect us on subtle, personal levels, too. As I type this, there is a little assistant, modeled after Albert Einstein, in the lower right-hand corner of my screen. With my technology, I have upgraded to the newest version of my word processor, and now this little assistant pops up. He just sits there: staring, looking at his clock, yawning at times, and once in a while falling asleep when all is well. When a spelling error occurs he suddenly wakes up, asking whether he should correct it. Is this some sort of sick joke? Have humans become so terrible and lazy that we cannot remember how to spell, so that we must have this "virtual" assistant attend to our errors? He stares at me now, and I wonder if he is aware that I am typing about him. It is almost scary when you think about it. So, to stop myself from turning this discussion page into a whole other essay, I am going to stop here. I hope that I have sparked ideas and opened new horizons in you, as researching and creating this paper did in me. And in one last word, I want to comment on the cover page picture. Okay, so it might be a bit obscene, but it truly shows our eventual link and bond to machines and computers. Besides, you'll never guess where I got it. (The Internet, of course.) f:\12000 essays\technology & computers (295)\Coping with Computers.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ CIS 101 DECEMBER 20, 1996 PROF. 
GARTNER COPING WITH COMPUTERS While the twentieth century has proven to be a technological revolution, no single development has had as much impact on our day-to-day lives as the computer. For many, the development of the modern computer has provided more widespread business opportunities, greater production efficiency, and greater convenience at both work and home than any other innovation. Many of the degrees earned today did not exist twenty years ago; many computer science degrees are based on technologies that had not even been developed until quite recently. The resulting situation is a workforce that has been caught with its 'pants down.' Many of the senior members of this workforce are at a disadvantage when it comes to competing with newer college graduates in today's computer world. This article deals with the feelings of one particular person in this position. Linda Ellerbee, a journalist and author, owns a television production company. She also has her own column in Windows magazine. Her experiences with modern computer technology range from the terminals of the 1970s all the way through to today's Internet and e-mail. One of her first experiences with a computer involved sending a message over the AP news wire. As it turns out, she expressed her candid opinion on some very sensitive topics of the time, including but not limited to the Vietnam War. Consequently, the AP was not amused with the message, and she was fired. At the time, this incident was notable enough to make it into Newsweek magazine. Later on, she moved into television as a reporter, but now owns her own production company, Lucky Duck Productions. Here, she realized that computers act as the driving force in a technologically based industry. She also realized that the younger generations are certainly more comfortable and at home with personal computers. 
While running her production company, she tells of her experience with her favorite 'ghost employee.' In her efforts to find a system administrator, she was referred to Columbia University's Center for Telecommunications Research. There she negotiated a salary via e-mail, and whenever a system needs to be set up, the ghost does it over the Internet. Of course, the bill is sent by e-mail as well. To this day, she has never seen her system administrator. Despite her negative or unusual experiences with the technological revolution, Ellerbee admits that she does appreciate the technology that she and her office use. She says that she has made peace with technology, and I would have to say that her adaptation to this new way of operating is very admirable. Unfortunately, not everybody in Ellerbee's position is as adaptive to this type of change as she was. However, with children working with computers early in grade school, it is doubtful that many upcoming professionals will suffer from computer-phobia as so many do today. f:\12000 essays\technology & computers (295)\CYBER CHIPS.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ What is a V-chip? This term has become a buzzword in any discussion involving telecommunications regulation and television ratings, but not many reports define the new technology fully. A basic definition of the V-chip: it is a microprocessor that can decipher information sent in the vertical blanking interval of the NTSC signal, for the purpose of controlling violent or controversial subject matter. Yet the span of the new chip is much greater than any working definition can encompass. A discussion of the V-chip must include a consideration of the technical and ethical issues, in addition to examining the constitutionality of any law that might concern standards set by the US government. In the space provided for this essay, however, the focus will be the technical aspects and costs of the new chip. 
It is impossible to assume offhand either that the V-chip will solve the violence problem of broadcast television or that adding this little device to every set will be a First Amendment infringement. We can, however, find clues by examining the cold facts of broadcast television and the impact of a mandatory regulation on that free broadcast. One definition of the V-chip comes from Al Marquis of Zilog Technology: "Utilizing the EIA's Recommended Practice for Line 21 Data Service (EIA-608) specification, these chips decode EDS (Extended Data Services) program ratings, compare these ratings to viewer standards, and can be programmed to take a variety of actions, including complete blanking of programs." Neither the FCC nor Capitol Hill has set any standards for V-chip technology; this has allowed many different companies to construct chips that are similar yet not identical, and possibly not compatible. Each chip has advantages and disadvantages for the ratings system soon to be developed. For example, some units use on-screen programming, as VCRs and the Zilog product do, while others are considering set-top options. Also, different companies are using different methods of parental control over the chip. Another problem that these new devices may incur when included in every television is one of space. The NTSC signal includes extra information space known as the subcarrier and the vertical blanking interval. As explained in the quotation from Mr. Marquis, the V-chips will use a certain section of this space to send simple rating numbers and points that will be compared to the personality settings in the chip. Many new technologies are being developed for smart TV or data broadcast on this part of the NTSC signal. Basically, the V-chip will severely limit the bandwidth available for high-performance data transmission on the NTSC signal. There is also a cost to this new technology, which will be passed on to consumers. 
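Before turning to costs, the comparison step Marquis describes (decode a program's rating, compare it to the viewer's standard, blank if it exceeds the setting) can be sketched in a few lines. The numeric scale below is hypothetical; as noted above, no official rating standard had been set at the time of writing.

```python
# Sketch of the V-chip comparison logic: blank the program when its
# decoded rating exceeds the household's setting. The 0-4 scale is a
# hypothetical stand-in for whatever ratings system is eventually adopted.

def should_blank(program_rating: int, viewer_limit: int) -> bool:
    """Return True when the decoded rating exceeds the parental setting."""
    return program_rating > viewer_limit

# A household limit of 2 on the hypothetical 0-4 scale:
print(should_blank(program_rating=3, viewer_limit=2))  # True: blanked
print(should_blank(program_rating=1, viewer_limit=2))  # False: shown
```

The real chips also decode the rating bytes from the line-21 data stream and may take actions other than blanking, but the threshold comparison is the core of the mechanism.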
Estimates are that each chip will cost six dollars wholesale and must be designed into the television's logic. The V-chip could easily push the price of televisions up by twenty-five or more dollars during the first years of production. The much simpler solution of set-top boxes allows control for those who need it and allows consumers who don't to save money and use new data technology. Another cost will almost certainly be levied on television advertisers for the upgrade of transmitting equipment. Whether the V-chip encoding signal is added upstream of the transmitter or directly into uplink units and other equipment intended for broadcast, this cost will have to be compensated for in advertising sales and prices. The V-chip regulation may also require another staff employee at most stations to effectively rate locally aired programs and events. All three of these cost questions have been addressed in only minute detail. Most debate has focused upon the new rating system and its implementation. Though equally important, that debate does not deal with the ground-floor concerns of the television producing and broadcasting industries. Now, as members of the industry, we must hold our breath until the fed either knocks the wind from free broadcast with mandatory ratings devices or allows the natural regulation to continue. f:\12000 essays\technology & computers (295)\Cyber rights.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Cyberspace and the American Dream: A Magna Carta for the Knowledge Age Release 1.2, August 22, 1994 This statement represents the cumulative wisdom and innovation of many dozens of people. It is based primarily on the thoughts of four "co-authors": Ms. Esther Dyson; Mr. George Gilder; Dr. George Keyworth; and Dr. Alvin Toffler. This release 1.2 has the final "imprimatur" of no one. In the spirit of the age: It is copyrighted solely for the purpose of preventing someone else from doing so. 
If you have it, you can use it any way you want. However, major passages are from works copyrighted individually by the authors, used here by permission; these will be duly acknowledged in release 2.0. It is a living document. Release 2.0 will be released in October 1994. We hope you'll use it to tell us how to make it better. Do so by: 
o Sending E-Mail to MAIL@PFF.ORG 
o Faxing 202/484-9326 or calling 202/484-2312 
o Sending POM (plain old mail) to 1301 K Street Suite 650 West, Washington, DC 20005 
(The Progress & Freedom Foundation is a not-for-profit research and educational organization dedicated to creating a positive vision of the future founded in the historic principles of the American idea.) Preamble The central event of the 20th century is the overthrow of matter. In technology, economics, and the politics of nations, wealth -- in the form of physical resources -- has been losing value and significance. The powers of mind are everywhere ascendant over the brute force of things. In a First Wave economy, land and farm labor are the main "factors of production." In a Second Wave economy, the land remains valuable while the "labor" becomes massified around machines and larger industries. In a Third Wave economy, the central resource -- a single word broadly encompassing data, information, images, symbols, culture, ideology, and values -- is actionable knowledge. The industrial age is not fully over. In fact, classic Second Wave sectors (oil, steel, auto-production) have learned how to benefit from Third Wave technological breakthroughs -- just as the First Wave's agricultural productivity benefited exponentially from the Second Wave's farm-mechanization. But the Third Wave, and the Knowledge Age it has opened, will not deliver on its potential unless it adds social and political dominance to its accelerating technological and economic strength. This means repealing Second Wave laws and retiring Second Wave attitudes. 
It also gives to leaders of the advanced democracies a special responsibility -- to facilitate, hasten, and explain the transition. As humankind explores this new "electronic frontier" of knowledge, it must confront again the most profound questions of how to organize itself for the common good. The meaning of freedom, structures of self-government, definition of property, nature of competition, conditions for cooperation, sense of community and nature of progress will each be redefined for the Knowledge Age -- just as they were redefined for a new age of industry some 250 years ago. What our 20th-century countrymen came to think of as the "American dream," and what resonant thinkers referred to as "the promise of American life" or "the American Idea," emerged from the turmoil of 19th-century industrialization. Now it's our turn: The knowledge revolution, and the Third Wave of historical change it powers, summon us to renew the dream and enhance the promise. The Nature of Cyberspace The Internet -- the huge (2.2 million computers), global (135 countries), rapidly growing (10-15% a month) network that has captured the American imagination -- is only a tiny part of cyberspace. So just what is cyberspace? More ecosystem than machine, cyberspace is a bioelectronic environment that is literally universal: It exists everywhere there are telephone wires, coaxial cables, fiber-optic lines or electromagnetic waves. This environment is "inhabited" by knowledge, including incorrect ideas, existing in electronic form. It is connected to the physical environment by portals which allow people to see what's inside, to put knowledge in, to alter it, and to take knowledge out. Some of these portals are one-way (e.g. television receivers and television transmitters); others are two-way (e.g. telephones, computer modems). 
Most of the knowledge in cyberspace lives the most temporary (or so we think) existence: Your voice, on a telephone wire or microwave, travels through space at the speed of light, reaches the ear of your listener, and is gone forever. But people are increasingly building cyberspatial "warehouses" of data, knowledge, information and misinformation in digital form, the ones and zeros of binary computer code. The storehouses themselves display a physical form (discs, tapes, CD-ROMs) -- but what they contain is accessible only to those with the right kind of portal and the right kind of key. The key is software, a special form of electronic knowledge that allows people to navigate through the cyberspace environment and make its contents understandable to the human senses in the form of written language, pictures and sound. People are adding to cyberspace -- creating it, defining it, expanding it -- at a rate that is already explosive and getting faster. Faster computers, cheaper means of electronic storage, improved software and more capable communications channels (satellites, fiber-optic lines) -- each of these factors independently adds to cyberspace. But the real explosion comes from the combination of all of them, working together in ways we still do not understand. The bioelectronic frontier is an appropriate metaphor for what is happening in cyberspace, calling to mind as it does the spirit of invention and discovery that led ancient mariners to explore the world, led generations of pioneers to tame the American continent and, more recently, led to man's first exploration of outer space. But the exploration of cyberspace brings both greater opportunity, and in some ways more difficult challenges, than any previous human adventure. Cyberspace is the land of knowledge, and the exploration of that land can be a civilization's truest, highest calling. The opportunity is now before us to empower every person to pursue that calling in his or her own way. 
The challenge is as daunting as the opportunity is great. The Third Wave has profound implications for the nature and meaning of property, of the marketplace, of community and of individual freedom. As it emerges, it shapes new codes of behavior that move each organism and institution -- family, neighborhood, church group, company, government, nation -- inexorably beyond standardization and centralization, as well as beyond the materialist's obsession with energy, money and control. Turning the economics of mass-production inside out, new information technologies are driving the financial costs of diversity -- both product and personal -- down toward zero, "demassifying" our institutions and our culture. Accelerating demassification creates the potential for vastly increased human freedom. It also spells the death of the central institutional paradigm of modern life, the bureaucratic organization. (Governments, including the American government, are the last great redoubt of bureaucratic power on the face of the planet, and for them the coming change will be profound and probably traumatic.) In this context, the one metaphor that is perhaps least helpful in thinking about cyberspace is -- unhappily -- the one that has gained the most currency: The Information Superhighway. Can you imagine a phrase less descriptive of the nature of cyberspace, or more misleading in thinking about its implications? 
Consider the following set of polarities:

Information Superhighway / Cyberspace
Limited Matter / Unlimited Knowledge
Centralized / Decentralized
Moving on a grid / Moving in space
Government ownership / A vast array of ownerships
Bureaucracy / Empowerment
Efficient but not hospitable / Hospitable if you customize it
Withstand the elements / Flow, float and fine-tune
Unions and contractors / Associations and volunteers
Liberation from First Wave / Liberation from Second Wave
Culmination of Second Wave / Riding the Third Wave

"The highway analogy is all wrong," explained Peter Huber in Forbes this spring, "for reasons rooted in basic economics. Solid things obey immutable laws of conservation -- what goes south on the highway must go back north, or you end up with a mountain of cars in Miami. By the same token, production and consumption must balance. The average Joe can consume only as much wheat as the average Jane can grow. Information is completely different. It can be replicated at almost no cost -- so every individual can (in theory) consume society's entire output. Rich and poor alike, we all run information deficits. We all take in more than we put out."

The Nature and Ownership of Property

Clear and enforceable property rights are essential for markets to work. Defining them is a central function of government. Most of us have "known" that for a long time. But to create the new cyberspace environment is to create new property -- that is, new means of creating goods (including ideas) that serve people. The property that makes up cyberspace comes in several forms: Wires, coaxial cable, computers and other "hardware"; the electromagnetic spectrum; and "intellectual property" -- the knowledge that dwells in and defines cyberspace. In each of these areas, two questions must be answered. First, what does "ownership" mean? What is the nature of the property itself, and what does it mean to own it?
Second, once we understand what ownership means, who is the owner? At the level of first principles, should ownership be public (i.e. government) or private (i.e. individuals)? The answers to these two questions will set the basic terms upon which America and the world will enter the Third Wave. For the most part, however, these questions are not yet even being asked. Instead, at least in America, governments are attempting to take Second Wave concepts of property and ownership and apply them to the Third Wave. Or they are ignoring the problem altogether. For example, a great deal of attention has been focused recently on the nature of "intellectual property" -- i.e. the fact that knowledge is what economists call a "public good," and thus requires special treatment in the form of copyright and patent protection. Major changes in U.S. copyright and patent law during the past two decades have broadened these protections to incorporate "electronic property." In essence, these reforms have attempted to take a body of law that originated in the 15th century, with Gutenberg's invention of the printing press, and apply it to the electronically stored and transmitted knowledge of the Third Wave. A more sophisticated approach starts with recognizing how the Third Wave has fundamentally altered the nature of knowledge as a "good," and that the operative effect is not technology per se (the shift from printed books to electronic storage and retrieval systems), but rather the shift from a mass-production, mass-media, mass-culture civilization to a demassified civilization. The big change, in other words, is the demassification of actionable knowledge. The dominant form of new knowledge in the Third Wave is perishable, transient, customized knowledge: The right information, combined with the right software and presentation, at precisely the right time. 
Unlike the mass knowledge of the Second Wave -- "public good" knowledge that was useful to everyone because most people's information needs were standardized -- Third Wave customized knowledge is by nature a private good. If this analysis is correct, copyright and patent protection of knowledge (or at least many forms of it) may no longer be necessary. In fact, the marketplace may already be creating vehicles to compensate creators of customized knowledge outside the cumbersome copyright/patent process, as suggested last year by John Perry Barlow: "One existing model for the future conveyance of intellectual property is real-time performance, a medium currently used only in theater, music, lectures, stand-up comedy and pedagogy. I believe the concept of performance will expand to include most of the information economy, from multi-casted soap operas to stock analysis. In these instances, commercial exchange will be more like ticket sales to a continuous show than the purchase of discrete bundles of that which is being shown. The other model, of course, is service. The entire professional class -- doctors, lawyers, consultants, architects, etc. -- are already being paid directly for their intellectual property. Who needs copyright when you're on a retainer?" Copyright, patent and intellectual property represent only a few of the "rights" issues now at hand. Here are some of the others:

o Ownership of the electromagnetic spectrum, traditionally considered to be "public property," is now being "auctioned" by the Federal Communications Commission to private companies. Or is it? Is the very limited "bundle of rights" sold in those auctions really property, or more in the nature of a use permit -- the right to use a part of the spectrum for a limited time, for limited purposes? In either case, are the rights being auctioned defined in a way that makes technological sense?
o Ownership over the infrastructure of wires, coaxial cable and fiber-optic lines that are such prominent features in the geography of cyberspace is today much less clear than might be imagined. Regulation, especially price regulation, of this property can be tantamount to confiscation, as America's cable operators recently learned when the Federal government imposed price limits on them and effectively confiscated an estimated $___ billion of their net worth. (Whatever one's stance on the FCC's decision and the law behind it, there is no disagreeing with the proposition that one's ownership of a good is less meaningful when the government can step in, at will, and dramatically reduce its value.)

o The nature of capital in the Third Wave -- tangible capital as well as intangible -- is to depreciate in real value much faster than industrial-age capital -- driven, if nothing else, by Moore's Law, which states that the processing power of the microchip doubles at least every 18 months. Yet accounting and tax regulations still require property to be depreciated over periods as long as 30 years. The result is a heavy bias in favor of "heavy industry" and against nimble, fast-moving baby businesses.

Who will define the nature of cyberspace property rights, and how? How can we strike a balance between interoperable open systems and protection of property?

The Nature Of The Marketplace

Inexpensive knowledge destroys economies-of-scale. Customized knowledge permits "just in time" production for an ever rising number of goods. Technological progress creates new means of serving old markets, turning one-time monopolies into competitive battlegrounds. These phenomena are altering the nature of the marketplace, not just for information technology but for all goods and materials, shipping and services. In cyberspace itself, market after market is being transformed by technological progress from a "natural monopoly" to one in which competition is the rule.
Three recent examples:

o The market for "mail" has been made competitive by the development of fax machines and overnight delivery -- even though the "private express statutes" that technically grant the U.S. Postal Service a monopoly over mail delivery remain in place.

o During the past 20 years, the market for television has been transformed from one in which there were at most a few broadcast TV stations to one in which consumers can choose among broadcast, cable and satellite services.

o The market for local telephone services, until recently a monopoly based on twisted-pair copper cables, is rapidly being made competitive by the advent of wireless service and the entry of cable television into voice communication. In England, Mexico, New Zealand and a host of developing countries, government restrictions preventing such competition have already been removed and consumers actually have the freedom to choose.

The advent of new technology and new products creates the potential for dynamic competition -- competition between and among technologies and industries, each seeking to find the best way of serving customers' needs. Dynamic competition is different from static competition, in which many providers compete to sell essentially similar products at the lowest price. Static competition is good, because it forces costs and prices to the lowest levels possible for a given product. Dynamic competition is better, because it allows competing technologies and new products to challenge the old ones and, if they really are better, to replace them. Static competition might lead to faster and stronger horses. Dynamic competition gives us the automobile. Such dynamic competition -- the essence of what Austrian economist Joseph Schumpeter called "creative destruction" -- creates winners and losers on a massive scale. New technologies can render instantly obsolete billions of dollars of embedded infrastructure, accumulated over decades. The transformation of the U.S.
computer industry since 1980 is a case in point. In 1980, everyone knew who led in computer technology. Apart from the minicomputer boom, mainframe computers were the market, and America's dominance was largely based upon the position of a dominant vendor -- IBM, with over 50% world market-share. Then the personal-computing industry exploded, leaving older-style big-business-focused computing with a stagnant piece of a burgeoning total market. As IBM lost market-share, many people became convinced that America had lost the ability to compete. By the mid-1980s, such alarmism had reached from Washington all the way into the heart of Silicon Valley. But the real story was the renaissance of American business and technological leadership. In the transition from mainframes to PCs, a vast new market was created. This market was characterized by dynamic competition consisting of easy access and low barriers to entry. Start-ups by the dozens took on the larger established companies -- and won. After a decade of angst, the surprising outcome is that America is not only competitive internationally, but, by any measurable standard, America dominates the growth sectors of the world economy -- telecommunications, microelectronics, computer networking (or "connected computing") and software systems and applications. The reason for America's victory in the computer wars of the 1980s is that dynamic competition was allowed to occur, in an area so breakneck and pell-mell that government would've had a hard time controlling it _even had it been paying attention_. The challenge for policy in the 1990s is to permit, even encourage, dynamic competition in every aspect of the cyberspace marketplace.

The Nature of Freedom

Overseas friends of America sometimes point out that the U.S. Constitution is unique -- because it states explicitly that power resides with the people, who delegate it to the government, rather than the other way around.
This idea -- central to our free society -- was the result of more than 150 years of intellectual and political ferment, from the Mayflower Compact to the U.S. Constitution, as explorers struggled to establish the terms under which they would tame a new frontier. And as America continued to explore new frontiers -- from the Northwest Territory to the Oklahoma land-rush -- it consistently returned to this fundamental principle of rights, reaffirming, time after time, that power resides with the people. Cyberspace is the latest American frontier. As this and other societies make ever deeper forays into it, the proposition that ownership of this frontier resides first with the people is central to achieving its true potential. To some people, that statement will seem melodramatic. America, after all, remains a land of individual freedom, and this freedom clearly extends to cyberspace. How else to explain the uniquely American phenomenon of the hacker, who ignored every social pressure and violated every rule to develop a set of skills through an early and intense exposure to low-cost, ubiquitous computing? Those skills eventually made him or her highly marketable, whether in developing applications-software or implementing networks. The hacker became a technician, an inventor and, in case after case, a creator of new wealth in the form of the baby businesses that have given America the lead in cyberspatial exploration and settlement. It is hard to imagine hackers surviving, let alone thriving, in the more formalized and regulated democracies of Europe and Japan. In America, they've become vital for economic growth and trade leadership. Why? Because Americans still celebrate individuality over conformity, reward achievement over consensus and militantly protect the right to be different. But the need to affirm the basic principles of freedom is real.
Such an affirmation is needed in part because we are entering new territory, where there are as yet no rules -- just as there were no rules on the American continent in 1620, or in the Northwest Territory in 1787. Centuries later, an affirmation of freedom -- by this document and similar efforts -- is needed for a second reason: We are at the end of a century dominated by the mass institutions of the industrial age. The industrial age encouraged conformity and relied on standardization. And the institutions of the day -- corporate and government bureaucracies, huge civilian and military administrations, schools of all types -- reflected these priorities. Individual liberty suffered -- sometimes only a little, sometimes a lot:

o In a Second Wave world, it might make sense for government to insist on the right to peer into every computer by requiring that each contain a special "clipper chip."

o In a Second Wave world, it might make sense for government to assume ownership over the broadcast spectrum and demand massive payments from citizens for the right to use it.

o In a Second Wave world, it might make sense for government to prohibit entrepreneurs from entering new markets and providing new services.

o And, in a Second Wave world, dominated by a few old-fashioned, one-way media "networks," it might even make sense for government to influence which political viewpoints would be carried over the airwaves.

All of these interventions might have made sense in a Second Wave world, where standardization dominated and where it was assumed that the scarcity of knowledge (plus a scarcity of telecommunications capacity) made bureaucracies and other elites better able to make decisions than the average person. But, whether they made sense before or not, these and literally thousands of other infringements on individual rights now taken for granted make no sense at all in the Third Wave.
For a century, those who lean ideologically in favor of freedom have found themselves at war not only with their ideological opponents, but with a time in history when the value of conformity was at its peak. However desirable as an ideal, individual freedom often seemed impractical. The mass institutions of the Second Wave required us to give up freedom in order for the system to "work." The coming of the Third Wave turns that equation inside-out. The complexity of Third Wave society is too great for any centrally planned bureaucracy to manage. Demassification, customization, individuality, freedom -- these are the keys to success for Third Wave civilization.

The Essence of Community

If the transition to the Third Wave is so positive, why are we experiencing so much anxiety? Why are the statistics of social decay at or near all-time highs? Why does cyberspatial "rapture" strike millions of prosperous Westerners as lifestyle rupture? Why do the principles that have held us together as a nation seem no longer sufficient -- or even wrong? The incoherence of political life is mirrored in disintegrating personalities. Whether 100% covered by health plans or not, psychotherapists and gurus do a land-office business, as people wander aimlessly amid competing therapies. People slip into cults and covens or, alternatively, into a pathological privatism, convinced that reality is absurd, insane or meaningless. "If things are so good," Forbes magazine asked recently, "why do we feel so bad?" In part, this is why: Because we constitute the final generation of an old civilization and, at the very same time, the first generation of a new one. Much of our personal confusion and social disorientation is traceable to conflict within us and within our political institutions -- between the dying Second Wave civilization and the emergent Third Wave civilization thundering in to take its place. Second Wave ideologues routinely lament the breakup of mass society.
Rather than seeing this enriched diversity as an opportunity for human development, they attack it as "fragmentation" and "balkanization." But to reconstitute democracy in Third Wave terms, we need to jettison the frightening but false assumption that more diversity automatically brings more tension and conflict in society. Indeed, the exact reverse can be true: If 100 people all desperately want the same brass ring, they may be forced to fight for it. On the other hand, if each of the 100 has a different objective, it is far more rewarding for them to trade, cooperate, and form symbiotic relationships. Given appropriate social arrangements, diversity can make for a secure and stable civilization. No one knows what the Third Wave communities of the future will look like, or where "demassification" will ultimately lead. It is clear, however, that cyberspace will play an important role in knitting together the diverse communities of tomorrow, facilitating the creation of "electronic neighborhoods" bound together not by geography but by shared interests. Socially, putting advanced computing power in the hands of entire populations will alleviate pressure on highways, reduce air pollution, allow people to live further away from crowded or dangerous urban areas, and expand family time. The late Phil Salin (in Release 1.0 11/25/91) offered this perspective: "[B]y 2000, multiple cyberspaces will have emerged, diverse and increasingly rich. Contrary to naive views, these cyberspaces will not all be the same, and they will not all be open to the general public. The global network is a connected 'platform' for a collection of diverse communities, but only a loose, heterogeneous community itself. Just as access to homes, offices, churches and department stores is controlled by their owners or managers, most virtual locations will exist as distinct places of private property."
"But unlike the private property of today," Salin continued, "the potential variations on design and prevailing customs will explode, because many variations can be implemented cheaply in software. And the 'externalities' associated with variations can drop; what happens in one cyberspace can be kept from affecting other cyberspaces." "Cyberspaces" is a wonderful pluralistic word to open more minds to the Third Wave's civilizing potential. Rather than being a centrifugal force helping to tear society apart, cyberspace can be one of the main forms of glue holding together an increasingly free and diverse society.

The Role of Government

The current Administration has identified the right goal: Reinventing government for the 21st Century. To accomplish that goal is another matter, and for reasons explained in the next and final section, it is not likely to be fully accomplished in the immediate future. This said, it is essential that we understand what it really means to create a Third Wave government and begin the process of transformation. Eventually, the Third Wave will affect virtually everything government does. The most pressing need, however, is to revamp the policies and programs that are slowing the creation of cyberspace. Second Wave programs for Second Wave industries -- the status quo for the status quo -- will do littl

f:\12000 essays\technology & computers (295)\Cyberporn On a Screen Near You.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Cybersex. This word brings to mind a barrage of images, which might be found on Star Trek or in a virtual-reality video by Aerosmith. Sex is everywhere today -- in books, magazines, films, the Internet, television, and music videos. Something about the combination of sex and computers seems to make children, and adults for that matter, a little crazy. In an 18-month study, the team surveyed 917,840 sexually explicit, pornographic pictures on the Internet.
Trading in explicit imagery is now "one of the largest recreational applications of users of computer networks." The great majority (71%) of the sexual images on the newsgroups originate from adult-oriented bulletin-board systems (BBS). According to the BBS operators, 98.9% of the consumers of online porn are men; women account for the remaining 1.1% in chat rooms and on bulletin boards. Perhaps because hard-core sex pictures are so widely available elsewhere, the adult BBS market seems to be driven by demand for images that can't be found on the average magazine rack, such as pedophilia, hebephilia, and paraphilia. While groups like the Family Research Council insist that online child molesters represent a clear and present danger, there is no evidence that the danger is any greater than the thousands of other threats children face every day. The Exon bill proposed to outlaw obscene material and impose fines of up to $100,000 and prison terms of up to two years on anyone who knowingly makes "indecent" material available to children under the age of 18. Robert Thomas spends his days like any other inmate at the U.S. Medical Center for Federal Prisoners in Springfield. Thomas, 39, who ran an amateur BBS in California, made headlines last year when he and his wife were indicted for transmitting pornographic material to a government agent in Tennessee. This case shows how tight a squeeze the government is putting on Internet freedom.

f:\12000 essays\technology & computers (295)\Cyberspace Freedom.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Being one of millions of surfers throughout the Internet, I see that fundamental civil liberties are as important in cyberspace as they are in traditional contexts. Cyberspace, as defined in Webster's Tenth Edition dictionary, is the on-line world of networks. The right to speak and publish using a virtual pen has its roots in a long tradition dating back to the very founding of democracy in this country.
With the passage of the 1996 Telecommunications Act, Congress has prepared to turn the Internet from one of the greatest resources of cultural, social, and scientific information into the online equivalent of a children's reading room. By invoking the overbroad and vague term "indecent" as the standard by which electronic communication should be censored, Congress has ensured that information providers seeking to avoid criminal prosecution will close the gates on anything but the most tame information and discussions. The Communications Decency Act calls for two years of jail time for anyone caught using "indecent" language over the net; as if reading profanities online affects us more dramatically than reading them on paper. Our First Amendment states, "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof, or abridging the freedom of speech, or of the press...." The Act takes away this right. The Constitution-defying traitors creating these useless laws do not understand the medium they're trying to control. What they "claim" is that they are trying to protect our children from morally threatening content. This "protect our helpless children" ideology is bogus. If more government officials were knowledgeable about online information, they would realize the huge flaw the Communications Decency Act contains. We don't need the government to patrol fruitlessly on the Internet when parents can simply install software like Net Nanny or Surf Watch. These programs block all "sensitive" material from entering one's modem line. What's more, legislators have already passed effective laws against obscenity and child pornography. We don't need a redundant Act to accomplish what has already been written. Over 17 million Web pages float throughout cyberspace. Never before has information been so instant, and so global.
And never before has our government been so spooked by the potential power "little people" have at their fingertips. The ability for anyone to send pictures and words cheaply and quickly to potentially millions of others seems to terrify the government and control freaks. Thus, the Communications Decency Act destroys our own constitutional rights and insults the dreams of Jefferson, Washington, Madison, Mill, Brandeis, and de Tocqueville. It's funny: now that we finally have a medium that truly allows us to exercise our First Amendment right, the government is trying to censor it. Forget them! Continue to engage in free speech on the net. It's the only way to win the battle. David Hembree October 23, 1996 Dr. Willis

f:\12000 essays\technology & computers (295)\Cyberspace in Perspective.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

1. If a survey were being done on how people experience cyberspace, one would immediately notice that no two answers would be the same. Experiencing cyberspace is something that is different for every individual. I myself experience cyberspace psychologically; I experience it in my mind. There have been many attempts at defining the abstruse term, but to date, no one has pinned the tail on the donkey. There cannot be one solid definition for a word that possesses so many meanings. I personally associate the word cyberspace with the idea of being able to travel to distant places without ever leaving my chair. Obviously, I know that there is no possible way of visiting different places or countries via my home computer, but in my mind, when I see the location that I am connected to, it feels as though a part of me is there. The best part is that I can switch from scenario to scenario without having to travel any ground. I do not feel a sense of distance or location, except when it takes a prolonged amount of time to connect to a host.
When I travel from place to place (site to site), I do not cover any known physical distance, but instead I cover visual distance. Just as many people do, I refer to the places that I visit as virtual worlds. I like calling them this because I never actually get to see the reality of the "world". I only get to see it electronically and digitally. The feeling that I experience while in cyberspace is knowing that I possess the power to visit anywhere I want. When I click one of the buttons on the mouse, or what I refer to as a transporter, I feel as though all the power in the world rests at the end of my fingertips. I am in my own sort of fantasy land. Once I land in a desired location, or website, I have the opportunity to click on pictures and words that take me to new worlds. These pictures and words have the power to make my virtual tour even more pleasing by introducing me to new and exciting things. People have referred to experiences in cyberspace, experiences such as mine, as a basic extension of the mind. I definitely agree with this statement. I believe that it takes imagination and creativity to experience all of the things that cyberspace has to offer. With all the colors, strange text and mind-boggling graphics, cyberspace is something that everyone must experience on their own. No two people experience it in the same way, and it takes practice to learn different ways of experiencing all that it has to offer. I guess everyone must find their own little 'cyber-niche'.

2. In today's technologically oriented society, it is difficult for people to go about their daily lives without interacting in some form or another with digital components. Communication is the perfect example of how people interact with digital technology. Talking to loved ones who live on the other side of the globe, faxing a friend, or simply calling in sick to work are all forms of communication, but these examples are taken for granted.
A popular form of digital communication, whether people realize it or not, is the cellular phone. Cellular phones have become very popular toys over the past few years, and they are 100% digital. For people who are constantly on the go, the "Cell Phone" is a convenient digital advancement. I find the cellular phone to be of much help in the stickier situations: when I am forced to change a flat tire in -20° weather, when there's heavy traffic, when I want to find a quicker route to where I'm going, or when I get lost in an unfamiliar region. They are relatively expensive to use, but in most cases, I would say that they are well worth their price. I don't have five-hour conversations using them, but I can let people know what I want to tell them in a short amount of time, which I find extremely handy. Before the ease of world-wide portable phones, there was a different breed of digital communication devices: beepers. Beepers, or pagers as they are commonly called, go hand in hand with cell phones. If a person does not own a cellular phone, a pager is a great alternative. It is a piece of digital equipment that allows the carrier to be notified when someone is trying to get in touch with them. I find that pagers are better in the sense that they cost less than cell phones do. I was given a pager years ago, and I still use the same one today. It's much easier to answer a pager than a cell phone when I am driving. Even though they are not as great a communication tool as a cellular phone, I would probably never give mine up. I also communicate with the pager when I read its LCD display to find out who is disturbing me. Still on the topic of communication, I cannot forget to mention television. It is a huge form of communication today, more so than ever before. Although televisions have changed dramatically in a short amount of time, TVs communicate several different messages.
Everyone watches television at some point, and when they do, they are most likely interacting with a digital TV. I watch television for relaxation, or to keep up to date on world events. By simply changing the channels, or through more complex tasks such as programming channels, times, and dates, I am interacting with the television. Television is visually pleasing; my concentration becomes so fixed that I have little or no idea what is going on around me. VCRs go hand in hand with the television. As with the TV, I have to give the VCR commands: whether I tell it to PLAY, STOP or REWIND, I am interacting with it, and as with the television, I fix all of my attention on what it is showing or playing. Going back to raw communication, a real piece of digital equipment that I have found to be handy is the fax machine. Fax machines transmit messages back and forth from one party to another. Fax machines are not as accessible as TVs or VCRs, but those who use them get the chance to interact with digital technology. I have found this type of digital equipment extremely useful when looking for a job; in the past I have faxed resumes to different employers who were looking for workers. I have used them on many other occasions as well, but getting access to one is relatively difficult without spending money to send copies. Another type of digital component that I interact with is my computer. I use it on a daily basis to type out assignments, send e-mail, and gather information off the Internet. Also, being a student at Trent University, I have a student card which digitally allows me to take out books, eat, and get in and out of special events, all by simply swiping it through a scanner. Digital pocket agendas have become quite common among students. I use one almost every time I use the phone; it contains several phone numbers, important dates, and many reminders that make organization much easier. 
There are many other digital components that I interact with, but I feel that the ones I have mentioned are the most prevalent in my life. 3. Just as there are many digital components that I interact with, there are many analog components as well. The most popular analog component that I use is the telephone. Some use fiber-optic lines, but I will refer to the older, more traditional phones. There are many reasons why I use the phone, such as keeping in touch with family and friends, calling my boss at work, or ordering a pizza. In today's high-tech environment, I can even call JoJo to find out what tomorrow will bring. For me, the telephone is the easiest way to get in touch with people. I find it much easier to express feelings over the phone, for the simple reason that I do not have to be face to face with the person. Being face to face is sometimes much too personal, and I am far more interested in telling the boss that I cannot come into work without him seeing that I'm not actually sick. Clearly, then, there are many reasons for using a telephone, and these examples apply not only to me but to many others who use one. Some feel that because we are in such a high-tech world, they have to purchase the newest high-priced advancements, for example a high-quality digital watch. I think this is quite unnecessary. I own an old-fashioned analog watch; the good old glow-in-the-dark dial works best for me. I think it's just a status symbol to have the best watch, but then again, I've never seen a 24K gold digital watch. Also, when I am not listening to a CD, I still use cassette tapes. For me there is not much difference in the sound, though I can tell the difference when I want to cue (fast-forward or rewind) to a song. Obviously, CDs are the 'hip thing' of the '90s, but I find nothing wrong with a cassette. There are many forms of music that I listen to that are not digital. 
Many people still have tape players in their vehicles, which goes to show that I am not the only person who does not mind analog. 4. When trying to figure out how far apart two phones are from each other, several forms of calculation come into play. It is easy to estimate that your phone is approximately thirty feet from your neighbor's, but that is not very accurate; there are ways to measure the distance between two phones more exactly. When the police send out a signal that bounces off a moving object and back to their radar gun, a computer program measures how long the signal took to come back and performs the calculation. A programmer could similarly write a loop program that sends a signal to a faraway computer (modem) and then waits for the signal to come back. This loop would let the computer figure out how long the signal took to reach its destination and return, thus telling the person how far apart the two phones are. By looking at two phone numbers, one can also guess from their format how far apart they may be. For example, if one of the phone numbers being compared was (519) 498-0872 and the other was 011-356-7951, one might take a wild guess that these phones are far from each other. Long-distance calls, for some reason, seem to sound a little different from close connections; when the voice of the person on the other line sounds faded or full of static, you can guess that the call is coming from far away. One could get extremely technical and measure the resistance between two phones, but that is difficult for the average person to do. The easiest way to tell how far apart two phones are is to estimate, by voice quality, by the length of the phone number, or by running a loop program. 
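The loop-program idea above can be sketched in a few lines. This is a hypothetical illustration rather than a real modem driver: the signal speed is an assumed figure for propagation in copper wire, the send/echo functions are stand-ins, and in practice modem and switching delays would swamp the propagation time, so the estimate would be very rough.

```python
# Sketch of a "loop program": time a signal's round trip and convert it
# to a one-way distance. All concrete values here are assumptions.
import time

SIGNAL_SPEED_KM_S = 200_000  # roughly 2/3 the speed of light, assumed for copper wire

def estimate_distance_km(send_signal, wait_for_echo):
    """Time one round trip and halve it to get a one-way distance."""
    start = time.monotonic()
    send_signal()       # transmit toward the far modem
    wait_for_echo()     # block until the echo returns
    round_trip_s = time.monotonic() - start
    return round_trip_s * SIGNAL_SPEED_KM_S / 2  # halve: out and back

if __name__ == "__main__":
    # Demo with stand-in functions that simulate a 5 ms round trip.
    est = estimate_distance_km(lambda: None, lambda: time.sleep(0.005))
    print(f"Estimated distance: about {est:.0f} km")
```

Because the timer measures every source of delay, not just travel time, a real implementation would need to subtract the modems' fixed processing latency before converting to distance.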
f:\12000 essays\technology & computers (295)\CYBERSPACE.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ As described by William Gibson in his science fiction novel Neuromancer, cyberspace was a "consensual hallucination that felt and looked like a physical space but actually was a computer-generated construct representing abstract data." Years later, mankind has realized that Gibson's vision is very close to reality. The term cyberspace is frequently used to describe the process by which two computers connect with each other over telephone lines; in this communication between the two systems there seems to be no distance between them. There are now four categories that describe the major components of today's cyberspace. One of these is commercial on-line services. These large computer systems can host thousands of users simultaneously. When a computer user purchases an account from the company, he or she receives a screen name and a password, which are then used to log on and use the system. Most of the on-line systems have chat rooms where users can chat in real time with one another; some users even think of on-line services as a community. The second category involves bulletin boards (BBSs). These services offer user accounts like their larger on-line service cousins, but have fewer users because they run on smaller computers. The boards are run by system operators, more commonly known as sysops. Since most BBSs are hobbies, there is usually no charge for an account. As with on-line services, users use BBSs for trades, games, and chatting with other users. Since bulletin boards are so easy to set up, there are thousands of them located around the world. Each board has a theme, ranging from astronomy to racist neo-nazi crap. A board's theme helps users in their search for a board that will satisfy their personal preference. 
A third category is the private system. These systems sometimes run bulletin boards privately, without letting the public in. On a private system, users can perform specialized computer operations or access data. Through such a network, users within a company can send mail, faxes, and other messages to each other over the company's computer network; if a worker needs to look up a customer's information, he can access it through the company's private network. The public cannot get access to a company's private system unless he or she knows the system's password. The fourth and last category is computer networks: groups of connected computers that exchange information. One of the most well known is the Internet, the so-called "network of networks." Through the Internet a user can transfer files to and from systems. The program that allows this is called FTP (File Transfer Protocol). It allows users to send anything from faxes to software from one computer to another: the file is taken from one computer and sent across the phone lines to the receiving computer, which reassembles the information. In cyberspace there are a number of tools a user can use. E-mail is a popular tool which allows the transfer of electronic mail between users; this mail is more convenient than postal mail because it travels over phone lines. Software exchange is also popular: some systems sell software, while other times it comes free of charge. The FTP program is the reason transfers are so speedy. Games and entertainment are another resource; a user on-line can play a game against someone who is hundreds of miles away. It is now possible to go shopping from the privacy of your own computer without even having to leave your home. The chat rooms found on most on-line services allow users to communicate with a variety of people together in a virtual room. 
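An FTP transfer like the one described can be sketched with Python's standard ftplib module. This is a minimal illustration only; the host name, account, and file names below are placeholders, not a real server.

```python
# Minimal sketch of pulling a file from an FTP server.
# Host, credentials and file names are hypothetical placeholders.
from ftplib import FTP

def download(host, user, password, remote_name, local_name):
    """Log in with a screen name and password, then fetch one file."""
    with FTP(host) as ftp:            # open the control connection
        ftp.login(user, password)     # authenticate
        with open(local_name, "wb") as f:
            # RETR streams the remote file; each chunk is written locally.
            ftp.retrbinary(f"RETR {remote_name}", f.write)

# Example call (placeholder server):
# download("ftp.example.com", "guest", "secret", "notes.txt", "notes.txt")
```

The same module offers `storbinary` for uploads, mirroring the two-way exchange the essay describes.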
Sometimes services will allow guest speakers to have access to the rooms so multiple users can ask questions. One popular resource is education. A user can find endless amounts of information on the Internet, on-line services, chat rooms, or even personal computer software. The Internet is bigger than any library, and it is possible to find any type of information needed. 
f:\12000 essays\technology & computers (295)\Datorbrott.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Hacking and Other Computer Crimes, by Silvio Usilla, Fredrik Larsson, Patrik Bäckström & Anders Westerberg, autumn term 1992. Contents: Summary; Hacking; The hacker's history; How it began; Then came the computer; The term "hacker" changes meaning, again; The typical hacker; The modern hacker; Computer viruses; Computer crime in Sweden; Security; Confidentiality problems; Authorization; Password routines; Discussion page; Sources. Computer crime, summary. We have covered the subjects of hacking, computer crime, and unlawful computer intrusion. On hacking we have covered the history and what a hacker really does, since there are so many rumors about hackers. The computer crimes we have treated are crimes indirectly connected with computers. We have also written about unlawful intrusion. Hacking began with the telephone. It was often blind youngsters who began reaching out into the world via the telephone. The meaning changed when the computers arrived; then it was computer-interested youngsters who came to be called hackers. 
The mass media describe hackers as hooligans who do nothing but break into computer systems to sabotage them; this is not true. It is crackers who dial in to sabotage. A hacker receives the title from another hacker; otherwise it is worth nothing. Hackers often work together and help each other with problems, but at the same time they are careful about who gets the information, so that no novice misuses it. The media often paint hackers as serious criminals, but that is usually not the case. Hackers break into systems almost exclusively to learn a system and gain knowledge of its functions, not to destroy or steal information. A cracker, by contrast, is more destructive; one could say a cracker is a hacker with a destructive bent. Computer viruses: what are they, and what can be done about them? A virus is a program that has been programmed to destroy. The only protection is to keep a guinea-pig machine on which you plant the virus first. Computer crime in Sweden is not as common as one might think, and in fact it shows no tendency to increase either. This is shown by a survey that the Ministry of Public Administration (civildepartementet) carried out among companies around the country. The ministry asked, among other things, whether they felt worried about the future; they answered that they did not. Nor had they suffered any serious computer crime before, so they faced the future with confidence. It is of the greatest importance that registers and the like are protected from unauthorized access, for the consequences could be terrible for the country's security but also for life on the personal level. There are several ways to view the information on computer screens. One is compromising emanations (in Swedish, RöS, "röjande signaler"): with the help of an antenna and a monitor, you can see what information appears on a screen without being in the room. This is at the more sophisticated end. 
To protect data in transit, you can use encryption (hidden text), though you should avoid writing your own encryption programs. The most common measure is requiring a password when connecting to a computer system. The conclusion: protect yourself in proportion to the importance of the information and at a cost you consider reasonable, for there is no such thing as 100% protection. What is a hacker? How it began. The first people to be called hackers were a number of individuals in the 1960s, spread across the whole world, who learned to manipulate the telephone network. Most of them were young people, often blind or lonely, who wanted contact with other people. It began with a blind boy of about ten who amused himself by making calls on the telephone; it was the most fun he knew. But his interest was not limited to telephones: he also had a superb ear for music. By chance he discovered that if he whistled a certain tone, he could make the phone connect to some other number. In this way he learned to forward calls from his own telephone without having to pay for it, and he came into contact with other people who shared his interest in all sorts of places on Earth. He taught them the same things he had worked out, and boxes began to be built that could generate the tones needed to switch the phones. These were called Blue Boxes. Some companies even began manufacturing these boxes; you can guess whether they made money on it! Then came the computer. A little way into the 1970s, home computers slowly began to appear. The word "hacker" then changed meaning to roughly "a person who works, sleeps and lives with computers." There were a number of young people in the USA who lived this way. The language they spoke only hackers understood; an ordinary person could be left standing like a question mark. These young people have meant a great deal to the development of computers, especially the concept of time-sharing, or multitasking as it is also called. 
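The advice above about encrypting transferred data, scrambling text according to a pattern agreed between sender and receiver, can be shown with a toy example. This repeating-XOR scheme is my own illustration, not anything from the original text, and it is exactly the kind of homemade cipher the authors warn against: the same pattern applied twice restores the text, which also means the pattern is easy for an attacker to recover.

```python
# Toy "predetermined pattern" cipher: XOR each byte with a repeating key.
# Illustrative only -- homemade schemes like this are weak, as the text warns.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR data against the repeating key; applying it twice decrypts."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

message = b"meet at the terminal"
key = b"SECRET"                      # the shared, predetermined pattern
scrambled = xor_cipher(message, key)
restored = xor_cipher(scrambled, key)  # the same pattern reverses it
assert restored == message
```

Because the key repeats, an eavesdropper who guesses even a few plaintext bytes can read the pattern straight off, which is why the text recommends established systems over invented ones.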
This means that more than one person can use the same computer at the same time. The term "hacker" changes meaning, again. At the end of the 1970s, the telephone network began to be used to connect computers that might be in an entirely different town or country. Alongside this, private individuals and companies also began opening databases that you could dial into with a so-called modem (MOdulator/DEModulator), a device that converts the computer's digital signals into analog ones that can be carried over the telephone network. On these databases you could get information of all kinds, exchange experiences with other computer enthusiasts, and so on. On these databases grew up groups of people whose greatest interest was trying, with the help of a modem and a terminal, to get into systems they were not allowed into. These people now began to be called hackers, and they still are today. The typical hacker. The typical hacker is usually: * Male * 15-25 years old * Not previously convicted * Unusually intelligent and persistent * Interested only in computers * Finds school too easy and boring. Most hackers do nothing directly criminal beyond the intrusion itself. They do it only to gain knowledge about different systems. That is their greatest interest: gathering as much knowledge as possible about everything to do with computers. They can often stay up for days on end to get into a system. The modern hacker. "Hacker is a title of honor. It is worth something only when you receive it from others," as Jörgen Nissen quotes a hacker in Ny Teknik 1990 no. 49. A hacker, if he deserves the title, is usually a member of a group of other hackers; within the group he occupies himself with maintaining equipment, programming, and refining certain software. They hold meetings with other hacker groups where they socialize and discuss a bit of everything. 
The rumor that they copy software for one another at these meetings is probably not entirely false, but since that is not illegal, the matter is not so pressing. Don't misunderstand me: I did not mean that copying programs is out of date, only that the discussion is not relevant here. Computers have suddenly opened up a new field in which the young often outdo their teachers and almost always their parents. This has caused a number of myths to surface in hacker circles, for example the one about the young janitor who happens to pass a group of computer experts who cannot crack a certain computer problem; the janitor, a young hacker, types in a few codes and the problem solves itself. These stories are mostly invention, but there is probably a grain of truth in them. Many hackers moonlight for various computer companies: some install computers, others adapt computer systems to their users, and there are even those who solve problems over the phone. When the mass media take up the subject of hacking, it is almost exclusively about illegality: hackers who have gotten into a company's computer system and sabotaged it, or wormed their way into some bank computer and grabbed money one way or another. But to call this hacking is wrong. The proper term is cracking; a hacker is not necessarily a cracker, but a cracker is almost always a hacker. A cracker occupies himself with everything from breaking the copy protection on games to advanced crime and outright sabotage. If a hacker does break into a computer system, it is not to cause disorder but rather to prove his competence to his peers and to himself. Writing computer viruses is very unusual; it is against the hacker's code of honor. And it was handy that I got viruses in there, because now I have to explain them. You often hear of viruses causing disorder, but what is a virus? 
Computer viruses. A computer virus is a computer program whose task is to destroy information for others. It can hide inside other, innocent programs, or simply be a program you believe is harmless: a so-called Trojan horse. In some cases the virus just lies waiting for a certain point in time before, completely unexpectedly and without any warning, it springs into action and starts sabotaging, as with the Friday the 13th virus, which only activates on a Friday the 13th. Viruses used to spread mostly via portable storage media such as floppy disks, magnetic tape and the like, but now that data communication has gotten going, transfer by modem (a kind of telephone for the computer) has become the most common route. Now a so-called cracker (note: not a hacker) can break into a computer system and release his viruses. If the infected computer is connected to a mass of other computers, a so-called network, you can count on those being infected too. If it is a private company that catches the infection and the virus destroys important information for them, the losses can run into millions; but if it is a state-owned operation that handles, say, personal records and the like, the consequences can be catastrophic. As far as is known, though, it is personal computers that have been hardest hit. There are vaccination programs against viruses, but they are not very reliable: different viruses have different characters, so it is nearly impossible to make a vaccination program recognize them all, especially as new ones appear all the time. To become really sure that a program is completely virus-free, you would first have to examine it with several vaccination programs and then test-run it on an isolated computer for a while. If nothing happens after about a year, you can be fairly sure the program is virus-free. But who has the energy for all that? Besides, by then the program is rather old anyway. 
So far there has been no serious damage to computer systems, because the viruses found have been fairly harmless, but the mere fact that they exist ought to be a warning to us all. No one is safe. Now that I have discussed viruses and hacking a little, it is time for a taste of the computer crimes themselves. Computer crime in Sweden. The Ministry of Public Administration has conducted a survey in Sweden of how much companies are affected by computer crime. It turned out that in fact no one felt affected by serious computer crime. Big coups like the SPP/VPC affair, in which 53 million kronor was embezzled, have never happened again. Reports from the mass media have been misleading in their treatment of computer-related fraud and the like: when the various cases are examined in depth, it turns out they do not even fall under the strict concept of computer crime. The conclusion is thus that one cannot even demonstrate any occurrence at all. The most common computer crimes are said to be embezzlement and fraud. This is in fact ordinary, traditional crime, with the sole difference that computer technology was used; it is called computer-related crime. It naturally occurs, but to a limited extent. The most common case is that a person at, say, a bank, a post office or the social-insurance office, with access to a terminal, falls for the temptation to make illegal transactions. This is, as noted, unusual and of marginal economic significance, according to some sources. Happily, it does not look set to increase to any notable extent either. In some cases where the perpetrator was convicted, it was revealed that the crime had been going on for years, sometimes four or five. The interviewed sources show no sign of worry, even though the number of unreported cases may be relatively high. How the various cases of criminality are discovered varies. In some cases they are caught with the help of routine internal audits, but most common of all is that chance throws a wrench into the perpetrator's works. 
It may be that a colleague happens to notice a small oddity, and with that the wheels are set in motion. A good example of what incredible chance can accomplish is the embezzlement at Union Dime: not in Sweden, admittedly, but a good example all the same. A man, whom we can call Bruce Banner, worked as an accountant at the Union Dime savings bank. He felt unfairly treated by management, and this served as his motivation. He began manipulating the main computer so that it printed regular reports saying that all accounts were in order, even though in fact they were not. What Banner actually did was first falsify the bank's records, and then move phantom sums of money back and forth in the ledger in such a way that the whole thing was practically impossible to discover. He primarily targeted accounts that held a lot of money and saw transactions rarely. The bank's system was such that customers wanting to make transactions had to come to the bank with their passbooks, so that entries could be recorded both in the passbook and in the computer system. After the bank closed, he studied the day's transactions and ticked off the accounts with large balances. If he saw an account holding, say, $100,000, he went to his terminal, which he was entitled to use, and made a so-called supervisory override: a correction. There he ordered the computer to change the sum to $50,000. Then he opened the vault and took $50,000 with him in cash. As long as the person who owned the account made no large withdrawal, this would never be discovered. And sure enough, he was not caught until three years later. But he was not caught because of a colleague or anything of the sort. It happened that Banner was a big gambler who bet money on horses, and he had begun financing this gambling with the stolen money at a local illegal betting parlor. The police later raided the place and found Banner's name. 
But the reason the police tracked him down was that they had seen he had wagered up to $30,000 in a single day, and they wanted to know where he got the money. He then confessed to what he had done and said he regretted it. Banner got twenty months, a light sentence, but was released after fifteen for good behavior. How, then, could he manage the whole thing? Well, what he needed was a little quick thinking and the weaknesses in the system that he had come to know through experience. He was no genius at programming or anything of the sort; all he had was the training he had received to do his ordinary job. Other incidents include schoolchildren using someone else's password in the videotex system, though without causing any great harm. Sources say that such crime in Sweden does not currently constitute a problem. Of course, not all computer crime is about swindling money. There is more, much more. Next I shall take up a little about security: what is worth protecting, how to protect yourself, and so on. Security. There is much that is worth protecting, for instance various registers with mixed contents. Say that the armed forces' register of emergency depots, where weapons, canned food, gas masks and various medicines are stored, leaked out. Say also that it became known where the armed forces have their hidden bases. If this reached, shall we say, Norway, the consequences could be devastating if they took it into their heads to start a war. Now that is hardly likely, but say instead that VAM got hold of the registers; then they could try to kill various public figures. And if they could also get hold of the security companies' registers of alarms and so on, they could exploit these to commit further crimes. As you see, the consequences could be devastating if just these two, perhaps three, registers fell into the wrong hands. 
Consider, then, that there are thousands of similar registers, perhaps not of equal importance to national security, but if the social insurance office's or the social services' records leaked, the consequences in a workplace could be dreadful. Imagine finding out that someone at your workplace had been convicted of assaulting children; how would you react? Or that someone in your circle of acquaintances had contracted various contagious diseases during a trip to, let us say, Thailand. It is therefore of the utmost importance that such records never become public. Records of this kind are protected by the Secrecy Act, for whatever good that does against hacking/cracking (unlawful intrusion by forcing various passwords and other security barriers). But it does no good to write a lot of advanced security software as long as the phenomenon of compromising emanations (in Swedish "RöS", röjande signaler) exists. It is not long since people began thinking about compromising emanations in civilian contexts as well. They can take various forms, including sound, electromagnetic signals, video signals, radio signals, and superimposed signals carried out into the power grid. Computers, terminals, and even electric typewriters give off such emanations. They are not particularly hard to pick up over the air; it can be done with an ordinary television, usually a portable one, and a relatively simple antenna. An electric typewriter is usually not very interesting, but a computer on which drafts of secret documents are written is clearly much more so. Normally this method is not used by the ordinary hacker; it is used mainly for industrial espionage and for obtaining military secrets. How, then, does one protect against compromising emanations? Protection can be achieved by building fifteen-meter-thick concrete walls around the computer equipment, but there are somewhat simpler methods, such as shielding, various types of interference filters, and sound insulation. These are all fairly clumsy solutions, however; research is being pursued intensively, and the goal is to be able to build fully shielded computer equipment at a reasonable price.
There is also wiretapping (tapping information off the telephone network), again perhaps not something for the ordinary hacker, but it does occur. Wiretapping is prohibited under Swedish law, but both the telephone authority (Televerket) and the police know that it happens. The simplest way to protect against this kind of tapping is to encrypt particularly important information. The most common approach is to scramble the text according to a certain predetermined pattern shared between sender and receiver; one should, however, beware of inventing one's own code systems, since the pattern can then be all too easy to find. There are also programs in which the computer encodes the message more or less randomly, but this requires the receiver to have the same program. How this works in detail we do not know, but it can seem completely illogical to the ordinary layman. There are many different ways to break into computer systems, and many ways to protect against those who try; the examples above are only a fraction of them. I mentioned security software above, which conveniently leads me to the next topic.

Confidentiality Problems

When a crime is committed or being prepared, it is common for the criminal, who usually works at the company (a so-called "inside job"), to plant certain code in a program that is used daily. When the criminal then gets home, or to another suitable terminal, he can dial in, run the program, perhaps type a command, and presto, he is past the confidentiality barriers. It is therefore important to have so-called check-ups. These are carried out either manually or automatically. Within certain special systems there are programs that constantly check up on one another. These are very sophisticated programs and are not normally used in other systems. In any case, this is one version of an automatic check-up program that checks whether anyone has tampered with other programs. A manual check-up is carried out by an employee who goes through the program piece by piece.
This is probably the most reliable method, but also the most expensive and time-consuming, which is why automatic programs are usually used. But there are other ways to break in. To make things a little harder for the criminal, there are special software packages that ask for a name and a password. These can of course be cracked, but only if the criminal knows a name that is in use and a password that matches it. As the section on compromising emanations showed, this is entirely possible. But even if the criminal has only a vague idea of what is correct (I will come to how later), he can try his way forward. Everyone who tries to enter the computer system is logged, however; even a mistyped password is written down in a special file. This means that if the criminal fails to get in with a name and password he has worked out somewhere, the names and passwords he tried are still on record. One can then check where the leak is; by leak I mean either carelessness or someone selling information. These log files can only be read by a person with a certain status and are not meant for just any member of staff. This brings us straight to the subject of authorization.

Authorization

When a person has typed in his name and password to enter the system, this does not mean that he immediately has access to all the information in it. To be sure that information does not spread to just anyone, there are certain so-called classification levels, meaning that, for example, the boss has the highest level and thus access to everything, while the ordinary workers have only a normal level. They thus have access to no more than they need for their work. How many such levels there are depends entirely on the company. Now we have talked so much about passwords that I feel more or less obliged to explain them at greater length.

Password Routines

Once such an authorization system is in place, some form of identification is obviously needed.
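A minimal sketch of the routines described above, combining a password check, a log of failed attempts, and classification levels. All names here (the user table, the clearance numbers) are my own invention for illustration, not from any real system:

```python
# Hypothetical user table: name -> (password, clearance level 0-2).
USERS = {
    "boss": ("north5", 2),   # highest level: access to everything
    "clerk": ("apple", 0),   # normal level: only what the job needs
}

failed_log = []  # stand-in for the special file only senior staff may read


def log_in(name, password):
    """Return the user's clearance level, or None on failure.

    Every failed attempt is recorded, including the name and password
    tried, so a leak (carelessness or sold information) can be traced.
    """
    entry = USERS.get(name)
    if entry is None or entry[0] != password:
        failed_log.append((name, password))
        return None
    return entry[1]


def may_read(clearance, required_level):
    """Classification check: access only to material at or below one's level."""
    return clearance is not None and clearance >= required_level
```

For example, `log_in("clerk", "guess1")` fails, and the guessed name and password land in `failed_log` for the security officer to inspect.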
The simplest is to just type in your name and possibly a password. But it is not a bad idea to have passwords here and there throughout the system: for example, a common password for logging in to the system, plus the other identifications. It is also rather smart to use a code word instead of your real name, and changing passwords often is not a bad idea either. For real protection, one should have a combination of either shared or personal passwords for the following:

- The terminal (keyboard and screen)
- The computer system, and/or parts of it
- Areas of working memory
- Programs, and/or parts of them
- Information registers, and/or parts of them
- Special categories of information
- Special functions

This leads us to the gigantic problem of personnel. The employees seem to be more or less incompetent when it comes to remembering their passwords. Ideally, you should also avoid writing your passwords down, and if you absolutely must, the slip of paper should be kept in a very, very safe place, such as a safe. But not even this works; it is actually quite common to find notes with passwords stuck to the terminals. A password should contain at least five characters; the reason it is so few is so that staff can remember their passwords more easily. It is becoming very common for passwords to be changed, preferably at irregular intervals, and the company's security officer is responsible for making sure this actually happens. Systems that do not do this also show much higher rates of computer intrusion than those with a rotation scheme. Why, then, doesn't everyone switch to this new, better system of irregular password changes?
First of all, it creates even more trouble for staff trying to remember their passwords; there is a lot of carelessness when security systems are installed; and the perhaps silliest but still very common reason for not switching to the better system is that it is so hard to come up with a password that is good yet simple. I myself use a system I find very good. First, pick a book from your bookshelf, which you then use every time you change your password. Decide on a few pages to use, say 23, 110, and 132; these pages you must not forget, and it does no great harm to note them down somewhere, since a burglar will never think of what comes next. You choose a suitable word from one of the pages, take that word, and combine it in some clever way with the page number. One example of a combination of, say, 132 and "björn" could be "132örn" or something similar. If you ever forget your password, it is quite easy to open the page again, find the word you used, and work out the password.

Discussion

It has been interesting to do a project like this, even though it was hard to put together a good compilation of the material. I am not sure it is wise to have as many as four people in a group. It is more fun, but harder to assemble all the texts when they come from so many directions. Communication worked well, I think; this is largely because we live near one another. It could well be harder if you live in different parts of town and have to discuss the whole project by phone and at school. You also notice how lazy you can be; everything gets postponed as long as possible: why do today what you can do tomorrow? I think one of the hardest parts was writing the summary. First of all, nobody wants to do it, and second, it is difficult to summarize material that has already been cut down.
Looking at the project itself, I have come away with two impressions while working on this subject. The first is that the public believes computer crime is far more common than it actually is, though this may be because most people do not really know what a computer crime consists of (I exclude software copying, which is very common, from that statement). The second is that computer crime in which the computer is used as a tool is not regarded very seriously; it can even be seen as rather amusing and fascinating when someone skilled with a computer manages to break into some company. I myself belong to this crowd of people who find it quite fascinating how one can get past various barriers and passwords and into secret systems and files. That is all I have to say in my part of the discussion. Fredrik

Well then... Time for this as well... I think it has been quite fun to do this project, at least as long as it was flowing. When we had to write the summary, things went VERY slowly. It has been very hard to get anything done; we just kept putting it off. But in the end we got a grip on it and did quite a bit of work. Then things ground to a halt again. Lucky we did not have much left by then... But anyway, on average it has gone reasonably well. I have no great desire to work in the same group again. Working with people you know as well as we know each other was not as good as I thought it would be. I think that when you work with someone you do not know very well, you do your best not to seem lazy. As for the project itself, it was hard to make it all fit together, but I think it turned out quite well in the end. As I understood it, the media seems to have spread poor information about hackers. As we wrote earlier, hackers do not commit computer crimes to make money.
And as for other computer crime and confidentiality, people did not seem to care particularly much about it. But like Fredrik, I too find people who break into computer systems fascinating. That was what I had to say. Anders

OK, then I suppose I should also make an intelligent contribution to the discussion page. Now that we have put this project together, I can say that it will do. It is not the best thing I have done, but not the worst either. From me it gets an average grade, and for certain reasons. One of the reasons is the material. We did not get hold of as much material as we had planned from the start. All the good books were out on loan, so we had a plentiful supply of bad ones. We simply had to pick the best of the worst, which was not much. Writing on the basis of poor information was not easy, but we did our best. On the whole I can say that I am satisfied. Nor was there anything wrong with how the work proceeded. Well... we may have postponed things a little too much, but it was never really a problem. What we put off, we calmly fixed later anyway. We had the situation under full control. I had of course expected somewhat better work, given that we live fairly close to one another. About the content of the project itself I do not hold any very divergent opinions. I think computer intrusion and other computer crimes are criminal and should be stopped. The fact that a computer was used in a crime should not be treated as a mitigating circumstance; it is still just as criminal as anything else. One good thing is that computer crime is not very common in Sweden. Then again, one could perhaps say that it can be mitigating when a youngster has merely broken into a system without causing any damage. Copying software is not a crime in Sweden, and that is probably by far the best rule in the computer law.
I don't think
f:\12000 essays\technology & computers (295)\Devopment of Computers and Technology.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computers in some form are in almost everything these days. From toasters to televisions, just about every electronic device has some form of processor in it. This is a very large change from the way things used to be, when a computer that took up an entire room and weighed many tons had about the same amount of power as a scientific calculator. The changes that computers have undergone in the last 40 years have been colossal. Yet although so much has changed from the ENIAC, which had very little power, broke down once every 15 minutes, and took another 15 minutes to repair, to our Pentium Pro 200s and the powerful Silicon Graphics workstations, the core of the machine has stayed basically the same. The only thing that has really changed in the processor is the speed at which it translates commands from 1's and 0's into data that actually means something to a normal computer user. Just in the last few years, computers have undergone major changes. PC users moved from MS-DOS and Windows 3.1 to Windows 95, a whole new operating system. Computer speeds have taken a huge leap as well: in 1995 a normal computer was a 486 running at 33 MHz, while by 1997 a blazing fast Pentium (a.k.a. 586) ran at 200 MHz and up. The next generation of processors is slated to come out this year as well: the next CPU from Intel, code-named Merced, running at 233 MHz and up. Another major innovation has been the Internet. This is a massive change not only to the computer world, but to the entire world as well.
The Internet has many different facets: newsgroups, where you can choose almost any topic to discuss with a range of other people, from university professors to professionals in the field of your choice to the average person; IRC, where you can chat in real time with other people around the world; and the World Wide Web, which is a mass of information networked from places around the world. Nowadays, no matter where you look, computers are somewhere, doing something. Changes in computer hardware and software have taken great leaps and jumps since the first video games and word processors. Video games started out with a game called Pong. It was monochrome (2 colors, typically amber and black, or green and black), you had 2 controller paddles, and the game resembled a slow version of air hockey. The first word processors had their roots in MS-DOS; they were not very sophisticated, nor much better than a good typewriter at the time. About the only benefits were the editing tools they provided. But since these first two dinosaurs of software, both have gone through some major changes. Video games are now set in fully 3-D environments, and word processors can now check your grammar and spelling. Hardware has also undergone some fairly major changes. When computers entered their 4th generation with the 8088 processor, it was just a bare machine with a physically large but weak processor running at 3-4 MHz, and there was no sound to speak of other than blips and bleeps from an internal speaker. Graphics cards were limited to two colors (monochrome), and RAM was limited to 640K and less. By this time, though, computers had already undergone massive changes. The first computers were massive beasts that weighed thousands of pounds. The first of these was known as the ENIAC; it was the size of a room, used punched cards as input, and didn't have much more power than a calculator.
The reason it was so large is that it used vacuum tubes to process data. It also broke down very often, to the tune of once every fifteen minutes, and then it would take another 15 minutes to locate the problem and fix it. This beast also used massive amounts of power, and people used to joke that the lights would dim in its city of origin whenever the computer was used.

The Early Days of Computers

The very first computer, in the roughest sense of the term, was the abacus. Consisting of beads strung on wires, the abacus was the very first desktop calculator. The first actual mechanical computer came from an individual named Blaise Pascal, who built an adding machine based on gears and wheels. This invention was not significantly improved upon until a person named Charles Babbage came along, who designed a machine called the difference engine. It is for this that Babbage is known as the "Father of the Computer." Born in England in 1791, Babbage was a mathematician and an inventor. He decided a machine could be built to solve polynomial equations more easily and accurately by calculating the differences between them. The model of this was named the Difference Engine. The model was so well received that he began to build a full-scale working version, with money he received as a grant from the British government. Babbage soon found that even the tightest design specifications could not produce an accurate machine: the smallest imperfection was enough to throw the tons of mechanical rods and gears out of alignment and put the entire machine out of whack. After he had spent 17,000 pounds, the British government withdrew its financial support. Even though this was a major setback, Babbage was not discouraged. He came up with another machine of wheels and cogs, which he called the analytical engine, and which he hoped would carry out many different kinds of calculations.
This too was never built, at least not by Babbage (although a model was put together later by his son), but the important thing about it was that it manifested five key concepts of modern computers:

· Input device
· Processor, or number calculator
· Storage unit to hold numbers waiting to be processed
· Control unit to direct the tasks to be performed and the sequence of calculations
· Output device

Parts of Babbage's invention were similar to one built by Joseph Jacquard. Jacquard, noting the repetitive tasks of weavers working on looms, came up with a stiff card with a series of holes punched in it, which let certain threads into the loom while blocking others, thereby controlling the weave. Babbage saw that the punched card system could be used to control the calculations of the analytical engine, and incorporated it into his machine. Ada Lovelace is known as the first computer programmer. The daughter of the English poet Lord Byron, she went to work with Babbage and helped develop instructions for doing calculations on the analytical engine. Lovelace's contributions were very great: her interest gave Babbage encouragement; she was able to see that his approach was workable; and she published a series of notes that eventually led others to complete what he had envisioned. Since 1790, the US Congress has required that a census of the population be taken every ten years. Counting the 1880 census took 7 1/2 years, because all counting had to be done by hand, and there was considerable apprehension in official circles as to whether the counting of the next census could be completed before the turn of the century. A competition was held to find some way to speed up the counting process. In the final test, involving a count of the population of St. Louis, Herman Hollerith's tabulating machine completed the count in only 5 1/2 hours. As a result of his system's adoption, an unofficial count of the 1890 population was announced only six weeks after the census was taken.
Like the cards Jacquard used for the loom, Hollerith's punched cards were stiff paper with holes punched at certain points. In his tabulating machine, rods passed through the holes to complete a circuit, which caused a counter to advance one unit. This capability pointed up the principal difference between the analytical engine and the tabulating machine: Hollerith was able to use electrical power rather than mechanical power to drive his device. Hollerith, who had been a statistician with the Census Bureau, realized that punched card processing had high sales potential. In 1896, he started the Tabulating Machine Company, which was very successful in selling machines to railroads and other clients. In 1924, this company merged with two others to form the International Business Machines Corporation, still well known today as IBM.

IBM, Aiken & Watson

For over 30 years, from 1924 to 1956, Thomas Watson, Sr., ruled IBM with an iron grip. Before becoming the head of IBM, Watson had worked for the Tabulating Machine Company. While there, he had a running battle with Hollerith, whose business talent did not match his technical abilities. Under Watson's leadership, IBM became a force to be reckoned with in the business machine market, first as a purveyor of calculators, then as a developer of computers. IBM's entry into computers was started by a young man named Howard Aiken. In 1936, after reading Babbage's and Lovelace's notes, Aiken became convinced that a modern analytical engine could be built. The important difference was that this new version of the analytical engine would be electromechanical. Because IBM was such a power in the market, with plenty of money and resources, Aiken worked out a proposal and approached Thomas Watson. Watson approved the deal and gave him 1 million dollars with which to build the new machine, later called the Harvard Mark I, which began the modern era of computers.
Nothing close to the Mark I had ever been built before. It was 55 feet long and 8 feet high, and when it processed information it made a clicking sound equivalent, according to one observer, to a room full of people knitting with metal needles. When the Mark I was unveiled in 1944, the occasion was marked by the presence of many uniformed Navy officers: it was now World War II, and Aiken had become a naval lieutenant, released to Harvard to help build the computer that was supposed to solve the Navy's problems. During the war, German scientists made impressive advances in computer design. In 1940 they even made a formal development proposal to Hitler, who rejected further work on the scheme, thinking the war was already won. In Britain, however, scientists succeeded in building a computer called Colossus, which helped crack supposedly unbreakable German radio codes. The Nazis unsuspectingly continued to use these codes throughout the war. As great as this accomplishment was, imagine the possibilities if the reverse had been true, and the Nazis had had the computer technology while the British did not. In the same time frame, American military officers approached Dr. Mauchly at the University of Pennsylvania and asked him to develop a machine that would quickly calculate the trajectories of artillery shells and missiles. Mauchly and his student, Presper Eckert, relied on the work of Dr. John Atanasoff, a professor of physics at Iowa State University. During the late '30s, Atanasoff had spent time trying to build an electronic calculating device to help his students solve complicated math problems. One night, the idea came to him of linking the computer memory and the associated logic. Later, he and an associate, Clifford Berry, succeeded in building the "ABC," for Atanasoff-Berry Computer. After Mauchly met with Atanasoff and Berry, he used the ABC as the basis for his next computer development.
From this association would ultimately come a lawsuit concerning attempts to patent a commercial version of the machine Mauchly built. The suit was finally decided in 1974, when a court ruled that Atanasoff had been the true developer of the ideas required to make an electronic digital computer actually work, although some computer historians dispute this decision. But during the war years, Mauchly and Eckert were able to use the ABC's principles to dramatic effect in creating the ENIAC.

Computers Become More Powerful

The size of ENIAC's numerical "word" was 10 decimal digits, and it could multiply two such numbers at a rate of 300 per second by finding the value of each product in a multiplication table stored in its memory. ENIAC was about 1,000 times faster than the previous generation of computers. It used 18,000 vacuum tubes, occupied about 1,800 square feet of floor space, and consumed about 180,000 watts of electrical power. It had punched-card input, 1 multiplier, 1 divider/square-rooter, and 20 adders employing decimal ring counters, which served as adders and also as quick-access (0.0002 seconds) read-write register storage. The executable instructions making up a program were embodied in the separate "units" of ENIAC, which were plugged together to form a "route" for the flow of information. The problem with the ENIAC was that the average life of a vacuum tube is 3,000 hours, so with 18,000 tubes one would burn out roughly every 15 minutes, and it would then take on average another 15 minutes to find the burnt-out tube and replace it. Fascinated by the success of ENIAC, the mathematician John von Neumann undertook, in 1945, a study of computation that showed that a computer could have a very simple, fixed physical structure and yet be able to carry out any kind of computation by means of proper programmed control, without the need for any change in the unit itself.
Von Neumann contributed a new understanding of how practical, fast computers should be organized and built. These ideas, usually referred to as the stored-program technique, became essential to future generations of high-speed digital computers and were universally adopted. The stored-program technique involves many features of computer design and function besides the one it is named after; in combination, these features make very high-speed operation attainable. An impression may be gained by considering what 1,000 operations per second means: if each instruction in a program were used only once, in consecutive order, no human programmer could generate enough instructions to keep the computer busy. Arrangements must be made, consequently, for parts of the program (called subroutines) to be used repeatedly, in a manner that depends on how the computation goes. It would also clearly be helpful if instructions could be changed as needed during a computation, to make them behave differently. Von Neumann met these two requirements by providing a special type of machine instruction, called a conditional control transfer, which allowed the program sequence to be interrupted and resumed at any point, and by storing all instruction programs together with data in the same memory unit, so that, when needed, instructions could be modified in the same way as data. As a result of these techniques, computing and programming became much faster, more flexible, and more efficient. Regularly used subroutines did not have to be reprogrammed for each new program but could be kept in "libraries" and read into memory only when needed, so much of a given program could be assembled from the subroutine library. The computer memory became the assembly site at which all parts of a long computation were kept, worked on piece by piece, and put together to form the final results. Once the advantage of these techniques became clear, they became standard practice.
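The stored-program idea above can be sketched in miniature. The toy instruction set here is my own invention for illustration, not von Neumann's actual design, but it shows the two key points: instructions and data share one memory, and a conditional control transfer (here `JMPZ`) lets a small subroutine loop instead of running straight through.

```python
def run(memory):
    """Interpret the instructions stored in `memory` until HALT."""
    acc = 0  # accumulator register
    pc = 0   # program counter
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":       # acc = contents of cell `arg`
            acc = memory[arg]
        elif op == "ADD":      # acc += contents of cell `arg`
            acc += memory[arg]
        elif op == "STORE":    # cell `arg` = acc; since instructions sit in
            memory[arg] = acc  # the same memory, code could rewrite itself
        elif op == "JMP":      # unconditional control transfer
            pc = arg
        elif op == "JMPZ":     # conditional control transfer:
            if acc == 0:       # resume the sequence at `arg` if acc == 0
                pc = arg
        elif op == "HALT":
            return acc

# One memory holds both program (cells 0-9) and data (cells 12-15):
# multiply 3 by 4 through repeated addition, a small reusable loop.
mem = [
    ("LOAD", 12),   # 0: acc = counter
    ("JMPZ", 8),    # 1: counter == 0? then finish
    ("ADD", 15),    # 2: acc = counter - 1
    ("STORE", 12),  # 3: counter = acc
    ("LOAD", 14),   # 4: acc = running total
    ("ADD", 13),    # 5: acc += multiplicand
    ("STORE", 14),  # 6: running total = acc
    ("JMP", 0),     # 7: loop back
    ("LOAD", 14),   # 8: acc = final total
    ("HALT", 0),    # 9: stop
    0, 0,           # 10-11: unused filler cells
    3,              # 12: counter (multiplier)
    4,              # 13: multiplicand
    0,              # 14: running total
    -1,             # 15: constant for counting down
]

product = run(mem)  # 3 * 4 computed by the stored program
```

Without `JMPZ` and `JMP`, this ten-instruction program could only perform ten operations; with them, the same stored instructions are reused on each pass through the loop.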
The first generation of modern programmed electronic computers to take advantage of these improvements was built in 1947. This group included computers using random-access memory (RAM), a memory designed to give almost constant access time to any particular piece of information. These machines had punched-card or tape I/O devices. Physically, they were much smaller than ENIAC: some were about the size of a grand piano and used only 2,500 electron tubes, far fewer than the earlier ENIAC required. The first-generation stored-program computers needed a lot of maintenance, reached perhaps 70 to 80 percent reliability of operation, and remained in use for 8 to 12 years. This group of computers included EDVAC and UNIVAC, the first commercially available computers. Early in the '50s, two important engineering discoveries changed the image of the electronic-computer field from one of fast but unreliable hardware to one of relatively high reliability and even greater capability: magnetic core memory and the transistor circuit element. These technical discoveries quickly found their way into new models of digital computers. RAM capacities increased from 8,000 to 64,000 words in commercially available machines by the 1960s, with access times of 2 to 3 microseconds. These machines were very expensive to purchase or even to rent, and were particularly expensive to operate because of ever-growing programming costs. Such computers were mostly found in large computer centers operated by industry, government, and private laboratories, staffed with many programmers and support personnel. This situation led to modes of operation that let users share the machines' capacity. During this time, another important development was the move from machine language to assembly languages, also known as symbolic languages. Assembly languages use abbreviations for instructions rather than numbers.
This made programming a computer a lot easier. After the implementation of assembly languages came high-level languages. The first language to be widely accepted was FORTRAN, developed in the mid-'50s as an engineering, mathematical, and scientific language. Then, in 1959, COBOL was developed for business programming. Both languages, still in use today, are more English-like than assembly. Higher-level languages allow programmers to give more attention to solving problems and less to coping with the minute details of the machines themselves. Disk storage complemented magnetic tape systems and gave users rapid access to the data they required. All these new developments made second-generation computers easier and less costly to operate. This began a surge of growth in computer systems, although computers were still mostly used by business, university, and government establishments; they had not yet been passed down to the general public. The real part of the computer revolution was about to begin. One of the most abundant elements in the earth is silicon, a non-metallic substance found in sand as well as in most rocks and clay. The element has given rise to the name "Silicon Valley" for Santa Clara County, about 50 km south of San Francisco. In 1965, Silicon Valley became the principal site of the computer industry, making the so-called silicon chip. An integrated circuit is a complete electronic circuit on a small chip of silicon. The chip may be less than 3 mm square and contain hundreds to thousands of electronic components. Beginning in 1965, the integrated circuit began to replace the transistor in what were now called third-generation computers. An integrated circuit could replace an entire circuit board of transistors with one chip of silicon smaller than a single transistor. Silicon is used because it is a semiconductor.
It is a crystalline substance that will conduct electric current once it has been doped with chemical impurities driven into the structure of the crystal. A cylinder of silicon is sliced into wafers, each about 76 mm in diameter. Each wafer is then etched repeatedly with a pattern of electrical circuitry; up to ten layers may be etched onto a single wafer. The wafer is then divided into several hundred chips, each carrying a circuit so small it is half the size of a fingernail, yet under a microscope as complex as a railroad yard. A chip one centimeter square is so capacious that it can hold 10,000 words, about the length of an average newspaper. Integrated circuits entered the market with the simultaneous announcement in 1959 by Texas Instruments and Fairchild Semiconductor that each had independently produced chips containing several complete electronic circuits. The chips were hailed as a generational breakthrough because they had four desirable characteristics. · Reliability - They could be used over and over again without failure. Whereas vacuum tubes failed every fifteen minutes, chips rarely failed, perhaps once in 33 million hours of operation. This reliability was due not only to their having no moving parts but also to the rigid work/not-work testing that semiconductor firms applied. · Compactness - Circuitry packed into a small space reduces equipment size, and machine speed increases because circuits are closer together, reducing the travel time for the electricity. · Low cost - Mass-production techniques have made it possible to manufacture inexpensive integrated circuits; that is, miniaturization has allowed manufacturers to produce many chips cheaply. · Low power use - Miniaturization of integrated circuits has meant that less power is required for computing than in previous generations. In an energy-conscious time, this was important.
The Microprocessor. Throughout the 1970s, computers gained dramatically in speed, reliability, and storage capacity, but entry into the fourth generation was evolutionary rather than revolutionary; the fourth generation was, in fact, an extension of third-generation technology. Early in the third generation, specialized chips were developed for memory and logic. All the parts were therefore in place for the next technological development: the microprocessor, a general-purpose processor on a chip. Ted Hoff of Intel developed the chip in 1969, and the microprocessor became commercially available in 1971. Nowadays microprocessors are everywhere: from watches to calculators to computers, processors can be found in virtually every machine in the home or business. Environments for computers have changed as well; there is no more need for climate-controlled rooms, and most models of microcomputers can be placed almost anywhere. New Stuff. After the technological improvements of the 1960s and 1970s, computers have not changed much, aside from becoming faster, smaller, and more user-friendly; the base architecture of the computer itself is fundamentally the same. The improvements from the 1980s on have been mostly "comfort stuff": sound cards (for high-quality sound and music), CD-ROMs (high-capacity storage discs), bigger monitors, and faster video cards. Computers have come a long way, but architecture-wise there have been few vast technological leaps.
f:\12000 essays\technology & computers (295)\Does Microsoft Have Too Much Power .TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Initially, there is nothing. Then, there is Bill Gates, the founder of Microsoft. Once a young, eager teenager running a small business of other teenagers, he is now the richest man in the world, controlling an operating system that practically every IBM-compatible computer in the world uses.
Computers are not the only thing Microsoft desires; now it wishes to influence the Internet. With all the opportunities the Internet offers, many companies race to develop software to get people and businesses online. Many dislike the power Microsoft has come to possess and might gain more of, but is there anything anybody can do? IBM has taken on the leader of software with an innovative operating system known as OS/2, but does it have a chance? Microsoft may be unstoppable with its foundation, influence, and power, but is that enough to practically own the computerized world as we know it? Usually, when we mention Microsoft in any form, we must place the registered-trademark symbol right after the word; the name is well known in virtually everyone's life. Although it is the super-empire it is today, Microsoft was once a small software business run by a young Bill Gates in a tiny office. Consisting of a few young adults, the company was not progressing as much as it would have liked. Its competitor, Digital Research, had created an early operating system, known as CP/M-86. Though never glamorized, CP/M did exist, and its makers had it a little worse, a husband-and-wife team working out of their not-so-tidy two-story house. The massive change occurred when a couple of IBM representatives showed up at the door of the CP/M founders only to be turned away, a rare event, since IBM was so highly respected by programmers at the time. IBM was then introduced to a young man named Bill Gates, at first mistaken for an office helper, who later struck a serious deal for Microsoft products. The one program unavailable at the time was an operating system, soon to be called QDOS, a raw form of the Disk Operating System we know today. When called upon by IBM, Bill Gates discovered that a man had created an operating system that could be pre-installed on the new IBM machine, scheduled for release in 1981.
The operating system would be similar to the CP/M-86 system created by Digital Research. The deal would make Bill Gates the wealthiest man in the United States, with an estimated worth of over thirteen billion dollars. Today, the Microsoft Corporation is the world's most lucrative software empire, and yet it still has dreams for the future. Computers today are very popular among homeowners, businesses, and schools. Microsoft began to cater to this population by creating user-friendly programs such as the ever-popular Windows. This graphical interface served as a bridge for the computer illiterate, and so began Microsoft's reign over the population. Left untouched by Microsoft would later be only a small minority of UNIX users and other DOS-like systems. Various programs were written just for Windows, which of course ran on DOS. OS/2 at this point already existed, but it was neither well known nor popular. Ironically, Bill Gates worked closely with IBM in 1983 to help develop OS/2, even conceding to IBM that OS/2 would one day overtake Microsoft's own attempt at a graphical interface, Windows. However, Windows advanced through its versions in graphics capabilities, as did DOS. In 1995, Microsoft announced its new creation that would revolutionize computers everywhere: Windows 95 was introduced as a powerful operating system with an astounding graphical and user-friendly interface. The proprietary nature of the Apple Macintosh operating system and of OS/2 led to small market acceptance, and Windows and DOS became the world's leading personal-computer operating systems. The message Microsoft is trying to send to consumers is simple: "Windows 95 is it; if you don't use it, buy it; if your computer can't run it, replace it." At present, Microsoft is pushing its shadowy terrain further with Windows 97, which includes some minor adjustments such as faster loading along with better Internet/TCP/IP components.
Along with the Windows empire, Microsoft is moving toward the Internet. There are currently two competitors fighting for control of this vast information network: Microsoft and Netscape. To control the Internet, a corporation would have to seize control of all browsing, server, and client programs, and any other application granting consumers access to the Internet. In most online haunts, supporters and users of niche products such as OS/2, the Macintosh, and other competing operating systems are drowned out by jeering proponents of Windows. In its hopes of winning over users, Microsoft has declared that Internet Explorer is free for the taking, while Netscape Navigator is free only for a trial period, after which users must lighten their wallets if they wish to keep using the browser. The future may not look bright for the Internet if Microsoft takes command. Microsoft is definitely the software empire of the 20th century. The question is, should it have that much power? It will take time to see the answer. With its history of power and business intuition, Microsoft may never be defeated. Advancements in the last ten years alone have been remarkable; to think how far we will be in the next decade is thought-provoking. Will Microsoft still be the software giant it is today, or will somebody take its place? Business ventures can easily be described as unpredictable. Still, users may have to ask themselves one question: does Microsoft have the power to monopolize the computerized generation?
f:\12000 essays\technology & computers (295)\DTP Project.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ANALYSIS OF THE SYSTEM My objectives for the new system are a web page for Doha College and a newsletter for the parents. I have conducted a survey to find out how people want the Doha College newsletter and web page to be.
The newsletter I am going to produce will be just as people want it: colourful, with background information about Doha College. I am going to produce the newsletter using Microsoft Word, Microsoft Publisher, and 3D Studio. I am going to use this software because it is genuinely useful: Microsoft Word, for example, has helpful tools that other software does not, and it is very easy to use; 3D Studio I will use because it is very powerful for creating graphics and animation. I am going to use a scanner, a video camera, and a printer to help me produce my new system. The software I am going to use to produce the web page for Doha College is Microsoft FrontPage, Microsoft PowerPoint, 3D Studio, and Excel. I will use FrontPage because it lets me create a professional web page without programming, which makes it much easier and quicker to produce a professional web page for Doha College. I will use PowerPoint because it is very good at creating graphics: I will create a picture in PowerPoint and then move it into FrontPage, and the same with 3D Studio, where I will create graphics and copy them into FrontPage. I might use Word to write some documents to put on the Doha College web site. I will use a scanner, a video camera, and a printer to help me produce my two new systems. To produce the two systems quickly and to a high publishing standard I require a high-speed CPU (central processing unit), plenty of RAM (random-access memory), and so on. Here are the hardware specifications I require: 1. CPU: Pentium 133 MHz 2. Motherboard: Pentium, 256 KB cache memory 3. RAM: 16 MB minimum 4. VGA card: Stealth 3D 3000 (4 MB VRAM), 64-bit processing 5. HDD: 1.2 GB minimum 6. I/O card: 32-bit 7. Modem: 33.6 kbps 8. Scanner: 16 million colours 9.
Printer: ink-jet, 700 x 700 dpi 10. Camera: one that can be connected to the computer 11. CD-ROM: 6x-speed CD-ROM minimum. These are the specifications I require to produce my two new systems. The software I need to produce the two systems is: 1. Microsoft Windows 95 2. DOS version 6.22 (Disk Operating System) 3. Microsoft Office 95 (Professional) 4. 3D Studio version 4 5. Microsoft FrontPage 6. CorelDRAW 7. Graphics Workshop 8. Paint Shop Pro. This is the software I need to produce my two new systems. Here are the terms for the hardware in the machine on which I am going to produce my two new systems: 1. Central processing unit (CPU): the main part of the computer, consisting of the registers, the arithmetic logic unit (ALU), and the control unit. 2. Motherboard: the printed circuit board (PCB) that holds the principal components of a microcomputer; components such as the microprocessor and clock chips are either plugged into the motherboard or soldered to it. Another name for the motherboard is the main board. 3. RAM (random-access memory): memory that has the same access time for all locations. 4. Video adapter: the circuitry that generates the signals needed for a video output to display computer data. VRAM is a separate high-speed memory into which the processor writes the screen data, which is then read out to the screen for display; this avoids using any main memory to hold screen data. 5. I/O card: a peripheral unit that can be used both as an input and as an output device. 6. Modem (modulator-demodulator): a data-communication device for sending and receiving data between computers over telephone circuits. 7. Scanner: a device that scans a drawing and turns it into a bit map. 8. HDD (hard disk drive): the unit made up of the mechanism that rotates the disks between the read/write heads and the mechanism that controls the heads; it uses rigid magnetic disks enclosed in a sealed container. 9.
CD-ROM changer: a CD-ROM drive with a mechanism for automatically swapping the current disc for another selected disc. 10. Camera capture: a camera connected to the computer through a parallel cable; the camera converts the pictures it captures to digital form so it can send them to the computer through the parallel cable (which is connected to the I/O card). An explanation of how the software I am going to use will be used: Microsoft Word: to run Microsoft Word you first have to run Windows, because Word works under Windows. To insert objects, click Insert/Object; you can insert many kinds of objects, for example video clips, sound clips, clip art, etc. Microsoft Publisher: runs under Windows. Microsoft PowerPoint: runs under Windows. 3D Studio: works under DOS. End.
f:\12000 essays\technology & computers (295)\Ecodisk.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ECODISC Ecodisc is a program which allows the user to take on the role of a nature-reserve manager. It was designed by Peter Bratt, an Englishman in South Devon. Ecodisc is designed so that the user can see what effects certain changes would have on the environment without actually making the changes. It is a good educational tool, showing new users the effects of certain decisions. It can also be used as a map, because it lets you see various parts of the nature reserve without actually going there. Ecodisc allows the user to take on the role of the nature-reserve manager, the person who decides what changes will be made to the reserve. With the aid of Ecodisc, the results of decisions can be shown without actually doing anything, or doing any harm to the environment. Ecodisc allows users to explore various parts of the nature reserve and view it from different positions. You can see the area from any direction (north, south, east, or west), and even from a helicopter's vantage point.
Ecodisc lets you see the areas of the reserve at any time of the year; for example, you could view the reserve in the middle of winter and see what it looks like in summer. Ecodisc is one of the first interactive programmes, and there are hopes that some day there will be interactive broadcast television. This would be a breakthrough in visual entertainment, because while television lets you see a place, interactive video will let you explore it. Interactive video is where the viewer decides the plot and characters of a movie or show; the viewer will essentially be able to write the script and produce the movie at the same time. Ecodisc would be very good for students (or anyone) interested in managing nature reserves or working for national parks, or simply as a matter of interest. It is an invention that could greatly help both the computer and television industries, as well as nature and wildlife organisations across the world. Already there are programmes that let the user take control of what is happening, and Ecodisc, being one of the first, has greatly aided the production of the others. Ecodisc is the start of a new way of life in visual entertainment and may also aid scientific research and study.
f:\12000 essays\technology & computers (295)\Electronic Commerce.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Initially, the Internet was designed to be used by government and academic users, but now it is rapidly becoming commercialized. It has on-line "shops", even electronic "shopping malls". Customers, browsing at their computers, can view products, read descriptions, and sometimes even try samples. What they lack is the means to buy from their keyboard, on impulse. They could pay by credit card, transmitting the necessary data by modem; but intercepting messages on the Internet is trivially easy for a smart hacker, so sending a credit-card number in an unscrambled message is inviting trouble.
It would be relatively safe to send a credit-card number encrypted with a hard-to-break code. That would require either the general adoption across the Internet of standard encoding protocols, or prior arrangements between buyers and sellers. Both consumers and merchants could see a windfall if these problems are solved. For merchants, a secure and easily divisible supply of electronic money would motivate more Internet surfers to become on-line shoppers. Electronic money would also make it easier for smaller businesses to achieve a level of automation already enjoyed by many large corporations, whose Electronic Data Interchange heritage means streams of electronic bits now flow instead of cash in back-end financial processes. Four key technology issues must be resolved before consumers and merchants will grant electronic money the same real and perceived value as tangible bills and coins: security, authentication, anonymity, and divisibility. Commercial R&D departments and university labs are developing measures to address security for both Internet and private-network transactions. The venerable answer to securing sensitive information, such as credit-card numbers, is to encrypt the data before you send it out. MIT's Kerberos, named after the three-headed watchdog of Greek mythology, is one of the best-known private-key encryption technologies. It creates an encrypted data packet, called a ticket, which securely identifies the user. To make a purchase, you generate the ticket during a series of coded messages exchanged with a Kerberos server, which sits between your computer system and the one you are communicating with. These latter two systems share a secret key with the Kerberos server to protect the information from prying eyes and to assure that your data has not been altered in transmission. But this technology has a potentially weak link: breach the server, and the watchdog rolls over and plays dead.
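The shared-secret idea behind such a ticket server can be sketched in a few lines of Python. This is a simplified illustration, not the real Kerberos protocol (which also uses timestamps, session keys, and full encryption of the ticket); the key value and user names are made up for the example.

```python
import hmac
import hashlib

# A trusted server and a merchant share a secret key. The server vouches
# for a client by issuing a MAC-protected "ticket" over the client's
# identity; the merchant, holding the same key, can verify the ticket.
SERVER_MERCHANT_KEY = b"secret shared by server and merchant"

def issue_ticket(user_id: str) -> bytes:
    """The trusted server authenticates the user and issues a ticket."""
    return hmac.new(SERVER_MERCHANT_KEY, user_id.encode(), hashlib.sha256).digest()

def merchant_verify(user_id: str, ticket: bytes) -> bool:
    """The merchant checks that the ticket really came from the server."""
    expected = hmac.new(SERVER_MERCHANT_KEY, user_id.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, ticket)

ticket = issue_ticket("alice")
print(merchant_verify("alice", ticket))    # True
print(merchant_verify("mallory", ticket))  # False
```

Note how the sketch also exposes the weak link described above: anyone who steals SERVER_MERCHANT_KEY from the server can mint valid tickets at will.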
An alternative to private-key cryptography is a public-key system that directly connects consumers and merchants. Public-key encryption uses two keys: one to encrypt the message, the other to decrypt it. Everyone who expects to receive a message publishes a key. To send digital cash to someone, you look up the recipient's public key and use the algorithm to encrypt the payment; the recipient then uses the private half of the key pair for decryption. Although encryption fortifies our electronic transactions against thieves, there is a cost: the processing overhead of encryption and decryption makes high-volume, low-value payments prohibitively expensive. The processing time for a reasonably safe digital signature conspires against keeping costs per transaction low; depending on key length, an average machine can sign only between twenty and fifty messages per second (decryption is faster). One way to factor out the overhead is to use a trustee organization, one that collects batches of small transactions before passing them on to the credit-card organization for processing. First Virtual, an Internet-based banking organization, relies on this approach. Consumers register their credit cards with First Virtual over the phone to eliminate security risks, and from then on they use personal identification numbers (PINs) to make purchases. Encryption may help make electronic money more secure, but we also need guarantees that no one alters the data, most notably the denomination of the currency, at either end of the transaction. One form of verification is the secure hash algorithm, which represents a large file of many megabytes with a relatively short number of a few hundred bits. This surrogate, whose smaller size saves computing time, is used to verify the integrity of the larger block of data. Hash algorithms work similarly to the checksums used in communications protocols: the sender adds up all the bytes in a data packet and appends the sum to the packet.
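The hash-based integrity check just described can be sketched with Python's standard hashlib module. The payload string is invented for the example; the point is that the sender transmits a short digest alongside the data, and the recipient recomputes it to detect any alteration (a plain checksum just sums bytes, while a cryptographic hash such as SHA-256 also resists deliberate tampering).

```python
import hashlib

def digest(data: bytes) -> str:
    """Short fixed-size surrogate for an arbitrarily large block of data."""
    return hashlib.sha256(data).hexdigest()

# Sender: transmit the payload together with its digest.
payload = b"PAY $10.00 TO MERCHANT #4217"
sent_digest = digest(payload)

# Recipient: recompute the digest and compare.
print(digest(payload) == sent_digest)                            # True
print(digest(b"PAY $9999.99 TO MERCHANT #4217") == sent_digest)  # False
```

Any change to the payload, even a single byte, produces a completely different digest, which is exactly the guarantee needed for the denomination of a digital coin.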
The recipient performs the same calculation and compares the two sums to make sure everything arrived correctly. One possible application of secure hash functions is in a zero-knowledge-proof system, which relies on challenge/response protocols: the server poses a question, the system seeking access offers an answer, and if the answer checks out, access is granted. In practice, developers could incorporate the common knowledge into software or a hardware encryption device, and the challenge could then consist of a random-number string; the device might, for example, submit the number to a secure hash function to generate the response. The third component of the electronic-currency infrastructure is anonymity: the ability to buy and sell as we please without threatening our fundamental right of privacy. If unchecked, all our transactions, as well as analyses of our spending habits, could eventually reside in the corporate databases of individual companies or in central clearinghouses, like those that now track our credit histories. Serial numbers offer the greatest opportunity for broadcasting our spending habits to the outside world. Today's paper money floats so freely throughout the economy that serial numbers reveal nothing about our spending habits, but a company that mints an electronic dollar could keep a database of serial numbers recording who spent the currency and what the dollars purchased. It is therefore important to build a degree of anonymity into electronic money. Blind signatures, devised by a company named DigiCash, are one answer: they let consumers scramble serial numbers. When a consumer makes an E-cash withdrawal, the PC calculates the number of digital coins needed and generates random serial numbers for the coins. The PC specifies a blinding factor, a random number by which it multiplies the coin serial numbers. A bank signs the blinded numbers using its own secret key and debits the consumer's account.
The bank then sends the authenticated coins back to the consumer, who removes the blinding factor. The consumer can spend the bank-validated coins, but the bank itself has no record of how the coins were spent. The fourth technical component in the evolution of electronic money is divisibility. Everything may work fine if transactions use nice round dollar amounts, but that changes when a company sells information for a few cents, or even fractions of a cent, per page, a business model that is evolving on the Internet. Electronic-money systems must be able to handle high volume at a marginal cost per transaction. Millicent, developed at Digital Equipment, may achieve this goal. Millicent uses a variation on the digital-check model with decentralized validation at the vendor's server, and relies on third-party organizations that take care of account management, billing, and other administrative duties. Millicent transactions use scrip, digital money that is valid only within Millicent. Scrip consists of a digital signature, a serial number, and a stated value (typically a cent or less). To authenticate transactions, Millicent uses a variation of the zero-knowledge-proof system: consumers receive a secret code when they obtain scrip, and this secret proves ownership of the currency when it is spent. The vendor that issued the scrip uses a master-customer secret to verify the consumer's secret. The system has not yet been launched commercially, but Digital says internal tests of transactions across TCP/IP networks indicate the system can validate approximately 1,000 requests per second, with TCP connection handling taking up most of the processing time. Digital sees the system as a way for companies to charge for information that Internet users obtain from Web sites. Security, authentication, anonymity, and divisibility all have developers working to produce the collective answers that may open the floodgates to electronic commerce in the near future.
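The blind-signature scheme described earlier, where the bank signs a coin's serial number without ever seeing it, can be illustrated with toy RSA arithmetic. The numbers below are tiny teaching values chosen for the example; a real system uses keys thousands of bits long plus careful padding.

```python
# Toy RSA blind signature, a sketch of the DigiCash-style blinding idea.
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # bank's public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # bank's private exponent

coin_serial = 1234                  # serial number the consumer generates
r = 99                              # blinding factor, random and coprime to n

# Consumer blinds the serial number before sending it to the bank.
blinded = (coin_serial * pow(r, e, n)) % n

# Bank signs the blinded value with its private key; it never sees 1234.
blind_sig = pow(blinded, d, n)

# Consumer removes the blinding factor, leaving a valid bank signature.
signature = (blind_sig * pow(r, -1, n)) % n

# Anyone can verify the coin with the bank's public key.
print(pow(signature, e, n) == coin_serial)  # True
```

Because the bank only ever saw the blinded value, it cannot later link the signed coin back to the withdrawal, which is exactly the anonymity property the text describes.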
The fact is that the electronic-money genie is already out of the bottle. The market will demand electronic money because of the accompanying new efficiencies that will shave costs in both consumer and supplier transactions. Consumers everywhere will want the bounty of a global marketplace, not one that is tied to bankers' hours. These efficiencies will push developers to overcome today's technical hurdles, allowing bits to replace paper as our most trusted medium of exchange.
f:\12000 essays\technology & computers (295)\Electronic Monitoring Vs Health Concerns.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Electronic Monitoring vs. Health Concerns Are privacy and electronic monitoring in the workplace becoming a problem? More employees are being monitored today than ever before, and the companies that do it are not letting up. While electronic monitoring in the workplace may cause increased stress levels and tension, its benefits far exceed the harm it may do. Employees do not realize how often electronic monitoring happens in their workplace. An estimated twenty million Americans are subjected to monitoring at work, commonly in the form of phone monitoring, e-mail searches, and searches through the files on their hard drives (Paranoid 435). A poll by MacWorld states that over twenty-one percent of all employees are monitored at work, and the larger the company, the higher the percentage (Privacy 445). Under this kind of scrutiny, most employees are often not working at their peak performance. The majority of Americans believe that electronic monitoring should not be allowed. Supreme Court Justice Louis D. Brandeis stated that of all the freedoms Americans enjoy, privacy "is the right most valued by civilized men" (Privacy 441).
A poll taken by Yankelovich Clancy Shulman for Time states that ninety-five percent of Americans believe electronic monitoring should not be allowed (Privacy 444). Harriet Ternipsede, a travel agent, gave lengthy testimony on how electronic monitoring at her job caused her undue stress and several health problems, including muscle aches, mental confusion, weakened eyesight, severe sleep disturbance, nausea, and exhaustion; she was later diagnosed with Chronic Fatigue Immune Dysfunction Syndrome (Electronic 446). A study by the University of Wisconsin found that eighty-seven percent of employees subjected to electronic monitoring suffered higher stress levels and increased tension, while only sixty-seven percent of employees not subjected to monitoring had those same symptoms (Paranoid 436). It is obvious that most employees are against electronic monitoring and that its use contributes to increased employee stress. Even so, the advantages derived from electronic monitoring far outweigh the disadvantages. Through employee monitoring, companies can save money in overall operating costs by weeding out employees who do not pull their weight, and can cut down on employee theft. By monitoring employees, it is possible to measure their performance and see whether they are meeting standards, and by letting go those who do not, the burden of daily tasks is lifted from every other employee in the department. Eighty to ninety percent of business theft is internal (Paranoid 432); through employee monitoring, the amount of money lost to theft can be dramatically reduced. While electronic monitoring in the workplace may contribute to employee stress, the benefits are far greater than the disadvantages.
Not only do companies save money lost to employee theft, sabotage, and vandalism; employees can also feel more confident that coworkers who do not pull their own weight will be terminated. When the company and the employees both benefit from increased profits, I would call this a win-win situation; if the savings are passed on to the customer, you could even have a win-win-win situation.
Works Cited CQ Researcher. "Privacy in the Workplace." Writing and Reading Across the Curriculum. Ed. Laurence Behrens and Leonard Rosen. 6th ed. New York: HarperCollins, 1997. 441-445. Ternipsede, Harriet. "Is Electronic Monitoring of Workers Really Necessary?" Writing and Reading Across the Curriculum. Ed. Laurence Behrens and Leonard Rosen. 6th ed. New York: HarperCollins, 1997. 446-448. Whalen, John. "You're Not Paranoid: They Really Are Watching You." Writing and Reading Across the Curriculum. Ed. Laurence Behrens and Leonard Rosen. 6th ed. New York: HarperCollins, 1997. 430-440.
f:\12000 essays\technology & computers (295)\Employment Skills.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Thiru Thirunavukarasu April 1, 1996 Employment Skills By: Thiru Thirunavukarasu Introduction In my essay I will talk about the skills required to get a good job nowadays. I will discuss three main points: academic, personal-management, and teamwork skills. I will give examples of these skills and reasons why each is important for getting a job. Academic Skills Academic skills are probably the most important skills you will need to get a job; they are often the first thing an employer looks for in an employee. They are the skills that give you the basic foundation to acquire a job, hold on to it, advance in it, and achieve the best results. Academic skills can be further divided into three sub-groups: communication, thinking, and learning skills. Communicate.
Communication skills require you to understand and speak the languages in which business is conducted. You must be a good listener and be able to understand things easily. One of the most important communication skills is reading: you should be able to comprehend and use written materials, including graphs, charts, and displays. One of the newest additions to communication skills is the Internet; since it is so widely used all around the world, you should have a good understanding of what it is and how to use it. Think. Thinking critically and acting logically to evaluate situations will get you far in your job. Thinking skills include solving mathematical problems and using new technology, instruments, tools, and information systems effectively, in fields such as technology, the physical sciences, the arts, the skilled trades, the social sciences, and more. Learn. Learning is very important for any job. For example, if your company gets some new software, you must be able to learn how to use it quickly and effectively after a few tutorials, and you must continue doing this for the rest of your career. It is one thing that will always be useful in any situation, not just in jobs. Personal Management Skills Personal-management skills are the combination of attitudes, skills, and behaviors required to get a job, keep it, progress in it, and achieve the best results. Like academic skills, they can be divided into three sub-groups: positive attitudes and behaviors, responsibility, and adaptability. Positive Attitudes and Behaviors. These are also very important for keeping a job. You must have good self-esteem and confidence in yourself. You must be honest and have integrity and personal ethics.
You must show your employer you are happy at what you are doing and have positive attitudes toward learning, growth, and personal health. Show energy and persistence to get the job done; these can help you get promoted or earn a raise. Responsibility. Responsibility is the ability to set goals and priorities in work and personal life. It is the ability to plan and manage time, money, and other resources to achieve goals, and to take accountability for actions taken. Adaptability. Have a positive attitude toward changes in your job. Show recognition of and respect for people's diversity and individual differences. Creativity is also important: you must have the ability to identify and suggest new ideas to get the job done. Teamwork Skills Teamwork skills are those skills needed to work with others co-operatively on a job and to achieve the best results. You should show your employer you are able to work with others, and understand and contribute to the organization's goals. Involve yourself in the group, make good decisions with others, and support the outcomes. Don't be narrow-minded; listen to what others have to say and give your thoughts on their comments. Be a leader, not a loner, in the group. Conclusion In conclusion, I would like to say that all the skills I have discussed are critical to get, keep, and progress in a job and to achieve the best results possible for you. Of these, though, I think academic skills are the most important skills you will learn. So if you keep at these skills, you will be happy with what you are doing, unlike a lot of people who are forced to take jobs that they do not like. f:\12000 essays\technology & computers (295)\Escapism and Virtual Reality.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Escapism and Virtual Reality ABSTRACT The use of computers in society provides obvious benefits and some drawbacks.
`Virtual Reality', a new method of interacting with any computer, is presented and its advantages and disadvantages are considered. The human aspect of computing and computers as a form of escapism are developed, with special reference to possible future technological developments. The consequences of a weakening of the sense of reality based upon the physical world are also considered. Finally, some ways to reduce the unpleasant aspects of this potential dislocation are examined. A glossary of computing terms is also included. Computers as Machines The progression of the machine into all aspects of human life has continued unabated since the medieval watchmakers of Europe and the Renaissance study of science that followed [Clocks]. Whilst this change has been exceedingly rapid from a historical perspective, it can nevertheless be divided into distinct periods, though rather arbitrarily, by some criteria such as how people travelled or how information was transferred over long distances. However these periods are defined, their lengths have become increasingly shorter, with each new technological breakthrough now taking less than ten years to become accepted (recent examples include facsimile machines, video recorders and microwave ovens). One of the most recent, and hence most rapidly absorbed, periods has been that of the computer. The Age of Computing began with Charles Babbage in the 19th century [Babbage], grew in the calculating machines between the wars [EarlyIBM], continued during the cryptanalysis efforts of World War II [Turing, Bletchley] and finally blossomed in the late 1970s with mass market applications in the developed countries (e.g. Japan [JapanSord]). Computers have gone through several `generations' of development in the last fifty years and their rate of change fits neatly to exponential curves [Graphs], suggesting that the length of each generation will become shorter and shorter, decreasing until some unforeseen limit is reached.
This pattern agrees with the more general decrease of length between other technological periods. The great strength of computers, whether viewed as complex machines or more abstractly as merely another type of tool, lies in their enormous flexibility. This flexibility is designed into a computer from the moment of its conception and accounts for much of the remarkable complexity that is inherent in each design. For this very reason, the uses of computers are now too many to ever consider listing exhaustively, and so only a representative selection is considered below. Computers are now used to control any other machine that is subject to a varying environment (e.g. washing machines, electric drills and car engines). Artificial environments such as hotels, offices and homes are maintained in pre-determined states of comfort by computers in the thermostats and lighting circuits. Within a high street shop or major business, every financial or stockkeeping transaction will be recorded and acknowledged using some form of computer. The small number of applications suggested above are so common to our experiences in developed countries that we rarely consider the element which permits them to function: a computer. The word `microprocessor' is used here to refer to a `stand-alone' computer that operates within these sorts of applications. Microprocessors are the chips at the heart of every computer, but without the ability to modify the way they are configured, only a tiny proportion of their flexibility is actually used. The word `computer' is used here to mean a machine with a microprocessor, a keyboard and a visual display unit (VDU), which together permit the user to modify the way that the microprocessor is used. Computers in this sense are used to handle more complex information than that with which microprocessors deal, for example, text, pictures and large amounts of information in databases.
They are almost as widespread as the microprocessors described above, having displaced the typewriter as the standard writing tool in many offices and supplanted company books as the most reliably current form of accountancy information. In both these examples, a computer permits a larger amount of information to be stored and modified in a less time-consuming fashion than any other method used previously. Another less often considered application is that of communication. Telephone networks are today controlled almost entirely by computers, unseen by the customer, but actively involved in every telephone call [phones]. The linking of computers themselves by telephone and other networks has led people to communicate with each other by using the computer both to write the text (a word-processor) and to send it to its destination. This is known as electronic mail, or `email'. The all-pervasive nature of the computer and its obvious benefits have not prevented the growth of a group of people who are vociferously concerned with the risks of widespread application of what is still an undeniably novel technology [comp.risks, ACMrisks]. Far from being reactionary prophets of doom, such people are often employed within the computer industry itself and yet have become wary of the pace of change. They are not opposed to the use of computers in appropriate environments, but worry deeply when critical areas of inherently dangerous operations are performed entirely by computers. Examples of such operations include correctly delivering small but regular doses of drugs into a human body and automatically correcting (and hence preventing) aerodynamic stability problems in an aircraft [plane1, plane2]. Both operations are typical `risky' environments for a computer since they contain elements that are tedious (and therefore error-prone) for a human being to perform, yet require the human capacity to intervene rapidly when the unexpected occurs.
Another instance of the application of computers to a problem actually increasing the risks attached is the gathering of statistical information about patients in a hospital. Whilst the overall information about standards of health care is relatively insensitive, the comparative costs of treatment by different physicians is obviously highly sensitive information. Restricting the `flow' of such information is a complex and time-consuming business. Predictions for future developments in computing applications are notoriously difficult to make with any accuracy, since the technology which is driving the developments changes so rapidly. Interestingly, much of what has been developed so far has its conceptual roots in science fiction stories of the late 1950s. Pocket televisions, lightning-fast calculating machines and weapons of pin-point accuracy were all first considered in fanciful fiction. Whilst such a source of fruitful ideas has yet to be fully mined out (indeed, Virtual Reality (see below) has been used extensively [Neuromancer and others]), many concepts now appearing have no fictional precursors. Some such future concepts, in which computers would be of vital importance, might be: the performance of delicate surgical procedures by robot, controlled by a computer, guided in turn by a human surgeon; the control of the flow of traffic in a large city according to information gathered by remote sensors; prediction of earthquakes and national weather changes using large computers to simulate likely progressions from a known current state [weather]; the development of cheap, fast and secure coding machines to permit guaranteed security in international communications; automatic translation from one language to another as quickly as the words are spoken; the simulation of new drugs' chemical reactions with the human body.
These are a small fraction of the possible future applications of computers, taken from a recent prediction of likely developments [JapanFuture]. One current development which has relevance to all the above is the concept known as `Virtual Reality', discussed further below. Virtual Reality Virtual Reality, or VR, is a concept that was first formally proposed in the early Seventies by Ted Nelson [ComputerDreams], though this work appears to be in part a summary of the thinking current at that time. The basic idea is that human beings should design machines that can be operated in a manner that is as natural as possible for the human beings, not the computers. For instance, the standard QWERTY keyboard is a moderately good instrument for entering exactly the letters which have been chosen to make up a word and hence to construct sentences. Human communication, however, is often most fluent in speech, and so a computer that could understand spoken words (preferably of all languages) and display them in a standard format such as printed characters would be far easier to use, especially since the skills of speech exist from an early age, but typing has to be learnt, often painfully. All other human senses have similar analogies when considering their use with tools. Pictures are easier than words for us to digest quickly. A full range of sounds provides more useful information than beeps and bells do. It is easier to point at an item that we can see than to specify it by name. All of these ideas had to wait until the technology had advanced sufficiently to permit their implementation in an efficient manner, that is, both fast enough not to irritate the user and cheap enough for mass production. The `state of the art' in VR consists of the following. A pair of rather bulky goggles which, when worn, display two images of a computer-generated picture. The two images differ slightly, one for each eye, and provide stereo vision and hence a sense of depth.
They change at least fifty times per second, providing the brain with the illusion of continuous motion (just as with television). Attached to the goggles are a pair of conventional high-quality headphones, fed from a computer-generated sound source. Different delays in the same sound reaching each ear provide a sense of aural depth. There is also a pair of cumbersome gloves, rather like padded ice-hockey gloves, which permit limited flexing in all natural directions and feed information about the current position of each hand and finger to a computer. All information from the VR equipment is passed to the controlling computer and, most importantly, all information perceived by the user is generated by the computer. The last distinction is the essence of the reality that is `virtual', or computer-created, in VR. The second critical feature is that the computer should be able to modify the information sent to the user according to the information that it received from the user. In a typical situation this might involve drawing a picture of a room on the screens in the goggles and superimposing upon it a picture of a hand, which moves and changes shape just as the user's hand moves and changes shape. Thus, the user moves his hand and sees something that looks like a hand move in front of him. The power of VR again lies in the flexibility of the computer. Since the picture that is displayed need not be a hand, but could in fact be any created object at all, one of the first uses of VR might be to permit complex objects to be manipulated on the screen as though they existed in a tangible form. Representations of large molecules might be grasped, examined from all sides and fitted to other molecules. A building could be constructed from virtual architectural components and then lit from differing angles to consider how different rooms are illuminated. 
The building could even be populated with imaginary occupants and the human traffic bottlenecks displayed as `hot spots' within it. One long-standing area of interest in VR has been the simulation of military conflicts in the most realistic form possible. The flight simulator trainers of the 1970s had basic visual displays and large hydraulic rams to actually move the trainee pilot as the real aeroplane would have moved. This has been largely replaced in more modern simulators by a massive increase in the amount of information displayed on the screen, leading to the mind convincing itself that the physical movements are occurring, with reduced emphasis on attempts to provide the actual movements. Such an approach is both cheaper in equipment and more flexible in configuration, since changing the aeroplane from a fighter to a commercial airliner need only involve changing the simulator's program, not the hydraulics. Escapism Escapism can be rather loosely defined as the desire to be in a more pleasant mental and physical state than the present one. It is universal to human experience across all cultures and ages, and also across historical periods. Perhaps for this reason, little quantitative data exists on how much time is spent practising some form of escapism, and there is only speculation as to why it should feel so important to be able to do so. One line of thought would suggest that all conscious thought is a form of escapism and that in fact any activity that involves concentration on sensations from the external world is a denial of our ability to escape completely. This hypothesis might imply that all thought is practice, in some sense, for situations that might occur in the future. Thoughts about the past are only of use for extrapolation into possible future scenarios.
However, this hypothesis fails to include the pleasurable parts of escapist thinking, which may involve either recalling past experiences or, more importantly for this study, the sense of security and safety that can exist within situations that exist only in our minds. A more general hypothesis would note the separate concepts of pleasure and necessity as equally valid reasons for any thought. Can particular traits in a person's character be identified with a tendency to escapist thoughts that lead to patterns of behaviour that are considered extreme by their society? It seems unlikely that a combination of hereditary intelligence and social or emotional deprivation can be the only causes of such behaviour, but they are certainly not unusual ones, judging by the common stereotypes of such people. The line of thinking that will be pursued throughout this essay is the idea that a person who enjoys extreme forms of escapist thought will often feel most comfortable with machines in general and with computers in particular. Certainly, excessive escapist tendencies have existed in all societies and have been tolerated or, more crucially, made use of in many different ways. For instance, apparent absent-mindedness would be acceptable in a hunter/gatherer society for the gatherers but not for a hunter. A society with a widespread network of bartering would value a combination of both the ability to plan a large exchange and the interpersonal skills necessary to conclude a barter, which are not particularly abstract. In a society with complex military struggles, the need to plan and imagine victories becomes an essential skill (for a fraction of the combatants). Moving from the need for abstract thought to its use, there is a scale of thought required to use the various levels of machines that have been mentioned earlier. A tool that has no electronics usually has a function that is easy to perceive (for example, a paperclip).
A machine with a microprocessor often has a larger range of possible uses and may require an instruction manual telling the operator how to use it (e.g. a modern washing machine or a television). Both of these examples can be used without abstract thought, merely trusting that they will do what they either obviously do, or what the manual has assured us they will do. The next level is the use of computers as tools, for example, for word-processing. Now a manual becomes essential and some time will have to be spent before use of the tool is habitual. Even then, many operations will remain difficult and require some while to consider how to perform them. A `feel' for the tool has to be acquired before it can be used effectively. The top level of complexity on this scale is the use of computers as flexible tools and the construction of the series of instructions known as programs to control the operation of the computer. Escapist thoughts begin when the operations of the programs have to be understood. In many cases, it is either too risky or too time-consuming to set the programs into action without first considering their likely consequences in minute detail. Such detailed comprehension of the action of a program often requires the person constructing the lists of instructions (the programmer) to enter a separate world, where the symbols and values of the program have their physical counterparts. Variables take on emotional significance and routines have their purpose described in graphic `action' language. A cursory examination of most programmers' programs will reveal this in the comments that are left to help them understand each program's purpose. Interestingly, even apparently unemotional people visualise their programs in this anthropomorphic manner [Weizenbaum76, Catt73]. Without this ability to trace the action of a program before it is performed in real life, the computing industry would cease to exist.
This ability is so closely related to what we do naturally and call `escapism' that the two have begun to merge for many people involved in the construction of programs. For some, what began as work has become what is done for pleasurable relaxation, which is a fortunate discovery for large computer-related businesses. The need for time-clocks and foremen has been largely eliminated, since the workers look forward to coming to work, often to escape the mundane aspects of reality. There are problems associated with this form of work motivation. One major discovery is that it can be difficult to work as a team in this kind of activity. Assigning each programmer a section of the project is the usual solution, but maintaining a coherent grasp of the project's state then becomes increasingly difficult. Indeed, this problem means that there are now computers whose design cannot be completely understood by one person [MMMonth]. Misunderstandings that result from this problem and the inherent ambiguities of human languages are often the cause of long delays in completion of projects involving computers. (The current statistics are that cost over-runs of 300% are not uncommon, especially for larger projects, and time over-runs of 50% are common [SWEng].) Another common problem is that of developed social inadequacy amongst groups of programmers and their businesses. The awkwardness of communicating complex ideas to other (especially non-technical) members of the group can lead programmers to avoid other people in person and to communicate solely by messages and manuals (whether electronic or paper). Up to now, most absorption of the information necessary to `escape' in this fashion has been from a small number of sources located in an environment full of other distractions. The introduction of Virtual Reality, especially with regard to the construction of programs, will eliminate many of these external distractions.
In return, it will provide a `concentrated' version of the world in which the programmer is working. The flexible nature of VR means that abstract objects such as programs can be viewed in reality (on the goggles' screens) in any format at all. Most likely, they will be viewed in a manner that is significant for each individual programmer, corresponding to how he or she views programs when escaped into the world that contains them. Thus, what were originally only abstract thoughts in one human mind can now be made real and repeatable, and may be distributed in a form that has meaning for other people. The difference between this and books or paintings is the amount of information that can be conveyed and the flexibility with which it can be constructed. The Dangers of Virtual Reality As implied above, the uses of Virtual Reality can be understood in two ways. Firstly, VR can be viewed as a more effective way of communicating concepts, abstract or concrete, to other people. For example, as a teaching tool, a VR interface to a database of operation techniques would permit a surgeon to try out different approaches on the same simulated patient or to teach a junior basic techniques. An architect might use a VR interface to allow clients to walk around a building that exists only in the design stage [ArchieMag]. Secondly, VR can be used as a visualisation tool for each individual. Our own preferences could be added to a VR system to such an extent that anyone else using it would be baffled by the range of personalised symbols and concepts. An analogy to this would be redefining all the keys on a typewriter for each typist. This would be a direct extension of our ability to conceive objects, since the machine would deal with much of the tedious notation and the many symbols currently necessary in complex subjects such as nuclear physics. In this form, VR would provide artificial support for a human mind's native abilities of construct-building and imagination.
It is the second view of VR, and derivations from it, that are of concern to many experts. On a smaller scale, the artificial support of mental activities has shown that once support is available, the mind tends to become lazy about developing what is already present. The classic case of this is, of course, electronic calculators. The basic tedious arithmetic that is necessary to solve a complicated problem in physics or mathematics is the same whether performed by machine or human, and in fact plays very little part in understanding (or discovering) the concepts that lie behind the problem. However, if the ability to perform basic arithmetic at the lowest level is neglected, then the ability to cope with more complex problems does seem to be impaired in some fashion. Another example is the ability to spell words correctly. A mis-spelt word only rarely alters the semantic content of a piece of writing, yet obvious idleness or inability in correct use of the small words used to construct larger concepts often leaves the reader with a sense of unease as to the validity of the larger concept. Extending the examples, a worrying prediction is that the extensive use of VR to support our own internal visualisations of concepts would reduce our ability to perform abstract and escapist thought without the machine's presence. This would be evident in a massive upsurge in computer-related entertainment, both in games and interactive entertainment, and would be accompanied by a reduction in the appreciation and study of written literature, since the effort required to imagine the contents would be more than was considered reasonable. Another danger of VR lies in its potential medical applications. If a convincing set of images and sounds can be collected, it might become possible to treat victims of trauma or brain-injured people by providing a `safe' VR environment for them to recover in.
As noted in [Whalley], there are several difficult ethical decisions associated with this sort of work. Firstly, the decision to disconnect a chronically disturbed patient from VR would become analogous to removing pain-killers from a patient in chronic pain. Another problem is that since much of what we perceive as ourselves is due to the way that we react to stimuli, whatever the VR creator defines as the available stimuli becomes the limiting extent of our reactions. Our individuality would be reduced, and our innate human flexibility with it. To quote [Whalley] directly: ``virtual reality devices may possess the potential to distort substantially [those] patients' own perceptions of themselves and how others see them. Such distortions may persist and may not necessarily be universally welcomed. In our present ignorance about the lasting effects of these devices, it is certainly impossible to advise anyone, not only mental patients, of the likely hazards of their use.'' Following on from these thoughts, one can imagine many other abuses of VR. `Mental anaesthesia' or `permanent calming' could be used to control long-term inmates of mental institutions. A horrendous form of torture by deprivation of reality could be imagined, with a victim being forced to perceive only what the torturers choose as reality. Users who experienced VR at work as a tool may choose to use it as a recreational drug, much as television is sometimes used today, and just as foreseen in the `feelies' of Aldous Huxley's Brave New World [BNW]. Conclusions Computers are now an accepted part of many people's working lives and yet still retain an aura of mystery for many who use them. Perhaps the commonest misapprehension is to perceive them as an inflexible tool; once a machine is viewed as a word processor, it can be awkward to have to redefine it in our minds as a database, full of information ordered in a different fashion.
Some of what people find difficult about using today's machines will hopefully be alleviated by the introduction of Virtual Reality interfaces. These should allow us to deal with computers in a more intuitive manner. If there ever comes a time when it is necessary to construct a list of tests to distinguish VR from reality, some of the following observations might be of use. The most difficult sense to deceive over a long period of time will probably be that of vision. The part of the human brain that deals with vision processing uses depth of focus as one of its mechanisms to interpret distances. Flat screens cannot provide this without a massive amount of processing to deliberately bring the object that the eyes are focussed upon into sharper relief than its surroundings. Since this is unlikely to be economical in the near future, the uniform appearance of VR will remain an indication of its falsehood. Another sign may be the lack of tactile feedback all over the body. Whilst most tactile information, such as the sensation of wearing a watch on one's wrist, is ignored by the brain, a conscious effort of detection will usually reveal its presence. Even the most sophisticated feedback mechanisms will be hard-pressed to duplicate such sensations, or the exact sensations of an egg being crushed or of walking barefoot on pebbles, for example. The sense of smell may prove to be yet another tell-tale sign of reality. The human sense of smell is so subtle (compared to our present ability to recreate odours), and is interpreted so constantly even though we are often unaware of it, that to mimic the myriad smells of life may be too complex ever to achieve convincingly. The computer industry will continue to depend upon employees who satisfy some part of their escapist needs by programming for pleasure.
In the near future, the need for increased efficiency and better estimates of the duration of projects may demand that those who spend their hours escaping are organised by those who do not. This would lead to yet another form of stratification within a society, namely, the dreamers (who are in fact now the direct labour force) and their `minders'. It should also encourage societies to value the power of abstract thought more highly, since direct reward will be seen to come from it. Virtual Reality is yet another significant shift in the way that we can understand both what is around us and what exists only in our minds. A considerable risk associated with VR is that our flexibility as human beings means that we may adapt our thoughts to our tool, instead of the other way round. Though computers and our interaction with them by VR are highly flexible, this flexibility is as nothing compared to the potential human range of actions. Acknowledgements: My thanks go to Glenford Mapp of Cambridge University Computer Laboratory and Olivetti Research Laboratory, Dr. Alan Macfarlane of the Department of Social Anthropology, Cambridge University, Dr. John Doar and Alan Finch for many useful discussions. Their comments have been fertile starting grounds for many of the above ideas. This essay contains approximately 4,500 words, excluding Abstract, Glossary and Bibliography. Glossary Chip: short for microchip, the small black tile-like objects from which electronic machines are made. Computer: machine with a microprocessor and an interface that permits modification by the user. Database: collection of information stored on a computer which permits access to the information in several ways, rather like having multiple indexes in a book. Email: electronic mail. Text typed into one machine can be transferred to another remote machine. Microprocessor: stand-alone computer, with little option for change by the user. Program: series of instructions to control the operation of a microprocessor.
Risk: often unforeseen dangers of applying computer-related technology to new applications. Stand-alone: unconnected to the rest of the electronic world. User: human who uses the machine or computer. VDU: Visual Display Unit. The television-like screen attached to a computer. Virtual: used to mean `imaginary' or `existing only inside a computer'. VR: Virtual Reality. Loosely, an interface to any computer that permits the user to use the computer in a more `involved' fashion. Word processor: application of a computer to editing and printing text. [Clocks] L. Mumford, Technics and Civilisation, Harcourt Brace Jovanovich, New York, 1963, pp. 13-15. [Babbage] J.M. Dubbey, The Mathematical Work of Charles Babbage, Cambridge University Press, 1978. [EarlyIBM] William Aspray, Computing Before Computers, Iowa State University Press, 1990. [Turing] B.E. Carpenter and R.W. Doran (Editors), A.M. Turing's ACE Report of 1946 and Other Papers, The MIT Press, 1980. [Bletchley] David Kahn, The Codebreakers, Sphere, London, 1978. [JapanSord] Takeo Miyauchi, The Flame from Japan, SORD Computer Systems Inc., 1982. [Graphs] J.L. Hennessy and D.A. Patterson, Computer Architecture: A Quantitative Approach, Morgan Kaufmann, California, 1990. [phones] Amos E. Joel, Electronic Switching: Digital Central Office Systems of the World, Wiley, 1982. [comp.risks] comp.risks, a moderated bulletin board available world-wide on computer networks. Its purpose is the discussion of computer-related risks. f:\12000 essays\technology & computers (295)\Essay On Hacking.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Essay On Hacking by Philip Smith A topic that I know very well is computers and computer hacking. Computers seem very complicated and very hard to learn, but, if given time, a computer can be very useful and very fun. Have you ever heard all of that weird computer terminology? For example, JavaScript. JavaScript is basically a computer language used when programming Internet web pages.
Have you ever been on the Internet and seen words go across the screen or moving images? This is all done by the JavaScript language. If you do not see moving images, it's because your web browser cannot read JavaScript. If you don't know what a web browser is, I will tell you: a web browser is simply a tool used to view the various websites on the Internet. All web browsers are different; some only interpret HTML, which is another programming language used to design web pages, and then there are some browsers that can play videos and sounds. Have you ever wondered why, when you want to go to a website, you have to type http://name of site.com? Well, I have been wondering for ages but still can't figure it out. Sometimes, though, you type ftp:// before the name of the site. This simply means File Transfer Protocol. You use this when downloading image files or any other files. Now, onto hacking. Most people stereotype computer whizzes simply as "HACKERS," but what they don't know is that there are three different types. First, there are hackers. Hackers simply make viruses and fool around on the Internet and try to bug people. Making viruses is simple for them: they get a program called a virus creation kit, which simply makes the virus of its user's choice. It can make viruses that simply put a constant beep in your computer speakers, or it can be disastrous and ruin your computer's hard drive. Hackers also go into chat rooms and cause trouble. Chat rooms are simply a service given by Internet providers to allow people all over the world to talk. As I was saying, hackers go into these rooms and basically try to take over, because in chat rooms there is one person in control. This person has the ability to put you in control or simply ban you. These hackers use programs that allow them to take full control over any room and, potentially, make the computers on the other side overload with commands, which in the end makes their computer collapse. 
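The http:// and ftp:// prefixes described above are URL "schemes," and a program can pick them apart mechanically. A minimal Python sketch (the addresses are made up for illustration, not taken from the essay):

```python
from urllib.parse import urlparse

# Hypothetical addresses of the two kinds the essay describes.
addresses = [
    "http://www.example.com/index.html",    # a web page, fetched over HTTP
    "ftp://ftp.example.com/pics/photo.gif", # a file, fetched over FTP
]

for url in addresses:
    parts = urlparse(url)
    # The scheme tells the browser which protocol to speak;
    # the rest names the remote machine and the file to request.
    print(parts.scheme, parts.netloc, parts.path)
```

Running this prints the scheme, host, and path of each address, which is essentially the split a browser performs before deciding whether to speak HTTP or FTP.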
Another type of computer whiz is called a cracker. Crackers are somewhat malicious: they take the security programs used by system operators and put them to evil purposes. System operators use these programs to search the net for any problems, but the same tools can be used for other ends. When crackers get into systems they usually just fool around but never destroy things. The last computer whiz is called a phreaker. Don't let the name fool you; phreakers are very malicious and will destroy any information found when breaking into a system. Phreakers use the same techniques as crackers, but they go a step further: once into systems, they usually plant viruses and steal information. Now that you know some important things about computers and the Internet, it will take you no time to surf the web. But remember, never get into hacking, cracking, or phreaking, because no matter how much you know about computers, you should never use it for malicious purposes. f:\12000 essays\technology & computers (295)\ethics in cyberspace.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Cyberspace is a global community of people using computers in networks. In order to function well, the virtual communities supported by the Internet depend upon rules of conduct, the same as any society. Librarians and information technologists must be knowledgeable about ethical issues for the welfare of their organizations and to protect and advise users. What is ethics? Ethics is the art of determining what is right or good. It can also be defined as a general pattern or way of life, a set of rules of conduct, or a moral code. Ethical guidelines are based on values. The Association for Computing Machinery (ACM) is one national organization which has developed a statement of its values. 
Every member of ACM is expected to uphold the Code of Ethics and Professional Conduct, which includes these general moral imperatives: 1) contribute to society and human well-being 2) avoid harm to others 3) be honest and trustworthy 4) be fair and take action not to discriminate 5) honor property rights including copyrights and patents 6) give proper credit for intellectual property 7) respect the privacy of others 8) honor confidentiality. The very nature of electronic communication raises new moral issues. Individuals and organizations should be proactive in examining these concerns and developing policies which limit their liability. Issues which need to be addressed include: privacy of mail, personal identities, access and control of the network, pornographic or unwanted messages, copyright, and commercial uses of the network. An Acceptable Use Policy (AUP) is recommended as the way an organization should inform users of expectations and responsibilities. Sample AUPs are available on the Internet at gopher sites and can be retrieved by using Veronica to search the keywords "acceptable use policies" or "ethics." The Computer Ethics Institute in Washington, D.C. has developed a "Ten Commandments of Computing": 1) Thou shalt not use a computer to harm other people. 2) Thou shalt not interfere with other people's computer work. 3) Thou shalt not snoop around in other people's computer files. 4) Thou shalt not use a computer to steal. 5) Thou shalt not use a computer to bear false witness. 6) Thou shalt not copy or use proprietary software for which you have not paid. 7) Thou shalt not use other people's computer resources without authorization or proper compensation. 8) Thou shalt not appropriate other people's intellectual output. 9) Thou shalt think about the social consequences of the program you are writing or the system you are designing. 
10) Thou shalt always use a computer in ways that show consideration and respect for your fellow humans (Washington Post, 15 June 1992: WB3). The University of Southern California Network Ethics Statement specifically identifies types of network misconduct which are forbidden: intentionally disrupting network traffic or crashing the network and connected systems; commercial or fraudulent use of university computing resources; theft of data, equipment, or intellectual property; unauthorized access of others' files; disruptive or destructive behavior in public user rooms; and forgery of electronic mail messages. What should an organization do when an ethical crisis occurs? One strategy has been proposed by Ouellette and Associates Consulting (Rifkin, Computerworld 25, 14 Oct. 1991: 84). 1. Specify the FACTS of the situation. 2. Define the moral DILEMMA. 3. Identify the CONSTITUENCIES and their interests. 4. Clarify and prioritize the VALUES and PRINCIPLES at stake. 5. Formulate your OPTIONS. 6. Identify the potential CONSEQUENCES. Other ethical concerns include issues such as 1) Influence: Who determines organizational policy? Who is liable in the event of a lawsuit? What is the role of the computer center or the library in relation to the parent organization in setting policy? 2) Integrity: Who is responsible for data integrity? How much effort is made to ensure that integrity? 3) Privacy: How is personal information collected, used, and protected? How is corporate information transmitted and protected? Who should have access to what? 4) Impact: What are the consequences on staff in the up- or down-skilling of jobs? What are the effects on staff and organizational climate when computers are used for surveillance, monitoring, and measuring? As schools incorporate Internet resources and services into the curriculum and the number of children using the Internet increases, other ethical issues must be addressed. 
Should children be allowed to roam cyberspace without restriction or supervision? How should schools handle student Internet accounts? What guidelines are reasonable for children? Organizations need to be proactive in identifying and discussing the ethical ramifications of Internet access. By having acceptable use policies and expecting responsible behavior, organizations can contribute to keeping cyberspace safe. Selected Resources on Information Ethics "Computer Ethics Statement." College & Research Libraries News 54, no. 6 (June 1993): 331-332. Dilemmas in Ethical Uses of Information Project. "The Ethics Kit." EDUCOM/EUIT, 1112 16th Street, NW, Suite 600, Washington, D.C. 20036. Phone: (202) 872-4200; fax: (202) 872-4318; e-mail: ethics@bitnic.educom.edu. "Electronic Communications Privacy Act of 1986." P.L. 99-508. Approved Oct. 21, 1986. [5, sec. 2703] Feinberg, Andrew. "Netiquette." Lotus 6, no. 9 (1990): 66-69. Goode, Joanne, and Maggie Johnson. "Putting Out the Flames: The Etiquette and Law of E-Mail." ONLINE 61 (Nov. 1991): 61-65. Gotterbarn, Donald. "Computer Ethics: Responsibility Regained." National Forum 71, no. 3 (Summer 1991): 26-31. Hauptman, Robert, ed. "Ethics and the Dissemination of Information." Library Trends 40, no. 2 (Fall 1991): 199-375. Johnson, Deborah G. "Computers and Ethics." National Forum 71, no. 3 (Summer 1991): 15-17. Journal of Information Ethics (ISSN 1061-9321). McFarland, 1992- . Kapor, M. "Civil Liberties in Cyberspace." Scientific American 265, no. 3 (1991): 158-164. Research Center on Computing and Society, Southern Connecticut State University and Educational Media Resources, Inc. "Starter Kit." Phone: (203) 397-4423; fax: (203) 397-4681; e-mail: rccs@csu.ctstate.edu. Rifkin, Glenn. "The Ethics Gap." Computerworld 25, no. 41 (14 Oct. 1991): 83-85. Shapiro, Norman, and Robert Anderson. "Toward an Ethics and Etiquette for Electronic Mail." Santa Monica, Calif.: Rand Corporation, 1985. 
Available as Rand Document R-3283-NSF/RC and ERIC Document ED 169 003. Using Software: A Guide to the Ethical and Legal Use of Software for Members of the Academic Community. EDUCOM and ITAA, 1992. Welsh, Greg. "Developing Policies for Campus Network Communications." EDUCOM Review 27, no. 3 (May/June 1992): 42-45. f:\12000 essays\technology & computers (295)\Feasibility of complete system protection .TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Jackman 1 Computer Viruses: Infection Vectors and Feasibility of Complete Protection A computer virus is a program which, after being loaded into a computer's memory, copies itself with the purpose of spreading to other computers. Most people, from the corporate-level power programmer down to the computer hobbyist, have had either personal experience with a virus or know someone who has. And the rate of infection is rising monthly. This has caused a widespread interest in viruses and what can be done to protect the data now entrusted to the computer systems throughout the world. A virus can gain access to a computer system via any one of four vectors: 1. Disk usage: in this case, infected files contained on a diskette (including, on occasion, diskettes supplied by software manufacturers) are loaded and used in a previously uninfected system, thus allowing the virus to spread. 2. Local Area Network: a LAN allows multiple computers to share the same data and programs. However, this data sharing can allow a virus to spread rapidly to computers that have otherwise been protected from external contamination. 3. Telecommunications: also known as a Wide Area Network, this entails the connection of computer systems to each other via modems and telephone lines. This is the vector most feared by computer users, with infected files being rapidly passed along the emerging information super-highway, downloaded from public services, and then used, thus infecting the new system. 4. 
Spontaneous Generation: this last vector is at the same time the least thought of and the least likely. However, because virus programs tend to be small, the possibility exists that the code necessary for a self-replicating program could be randomly generated and executed in the normal operation of any computer system. Even disregarding the fourth infection vector, it can be seen that the only way to completely protect a computer system is to isolate it from all contact with the outside world. This would include the user programming all of the necessary code to operate the system, as even commercial products have been known to be shipped already infected with viruses. In conclusion, because a virus can enter a computer in so many different ways, perhaps the best approach is a form of damage control rather than prevention: maintain current backups of your data, keep your original software disks write-protected and away from the computer, and use a good virus detection program. Sources Cited Burger, Ralf. Computer Viruses and Data Protection. Grand Rapids: Abacus, 1991. Fites, Philip, Peter Johnston, and Martin Kratz. The Computer Virus Crisis. New York: Van Nostrand Reinhold, 1989: 6-81. McAfee, John, and Colin Haynes. Computer Viruses, Worms, Data Diddlers, Killer Programs, and Other Threats to Your System. New York: St. Martin's Press, 1989: i-195. Roberts, Ralph. Compute!'s Computer Viruses. Greensboro: Compute! Publications, Inc., 1988: 29-82. Outline Thesis: Complete protection of a computer system from viruses is not possible, so efforts should be concentrated on recovery rather than prevention. I. Introduction, with definition. A. Define computer virus. B. Define interest group. C. Define problem. II. Discuss the ways that a virus can infect a computer. A. Disk exchange and use. B. Local Area Network. C. Telecommunications, also known as Wide Area Network. D. Spontaneous Generation. III. Summarize threat, and alternatives. 
A. Must isolate from outside world. B. Must write own programs. C. Propose alternative of damage control. f:\12000 essays\technology & computers (295)\Fiber Optics.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Fiber Optics Fiber Optic Cable Facts "A relatively new technology with vast potential importance, fiber optics is the channeled transmission of light through hair-thin glass fibers." 
- Less expensive than copper cables 
- Raw material is silica sand 
- Less expensive to maintain 
- If damaged, restoration time is faster (although more users are affected) 
- Backbone to the Information Superhighway 
Information (data and voice) is transmitted through the fiber digitally by the use of high-speed LASERs (Light Amplification by the Stimulated Emission of Radiation) or LEDs (Light Emitting Diodes). Each of these methods creates a highly focused beam of light that is cycled on and off at very high speeds. Computers at the transmitting end convert data or voice into "bits" of information. The information is then sent through the fiber by the presence, or lack, of light, so all of the data travels as light pulses. Computers on the receiving end convert the light back into data or voice, so it can be used. ORIGIN OF FIBER OPTICS All of this seems to be a very "modern" concept, and the technology we use is. The concept, though, was the idea of Alexander Graham Bell in the late 1800's. 
He just didn't have a dependable light source... some days the sun doesn't shine! He thought of the idea that our voices could be transmitted by pulses of light. The people who realized that audio, video, and other forms of data could be transmitted by light through cables were present-day scientists; most of the things that are possible today, Alexander Graham Bell could never even have dreamed of. Although the possibility of lightwave communications occurred to Bell (who invented the telephone), his ideas couldn't be used until the LASER or LED had been invented. Most of these advances occurred in the 1970s, and by 1977 glass-purifying and other fiber-optic manufacturing techniques had also reached the stage where interoffice lightwave communications were possible. With further technological development, many intercity routes were in operation by 1985, and some transoceanic routes had been completed by 1990. Now, in the mid-90's, worldwide connections are possible through the Internet. The light is prevented from escaping the fiber by total internal reflection, a process that takes place when a light ray travels through a medium with an index of refraction higher than that of the medium surrounding it. Here the fiber core has a higher refractive index than the material around the core, and light hitting that material is reflected back into the core, where it continues to travel down the fiber. THE PROPAGATION OF LIGHT AND LOSS OF SIGNALS The glass fibers used in present-day fiber-optic systems are based on ultrapure fused silica (sand). Fiber made from ordinary glass is so dirty that impurities reduce signal intensity by a factor of one million in only about 16 ft of fiber. These impurities must be removed before useful long-haul fibers can be made. But even perfectly pure glass is not completely transparent. It weakens light in two ways. One, occurring at shorter wavelengths, is a scattering caused by unavoidable density changes within the fiber. 
In other words, tiny variations in the density of the glass scatter some of the light as it travels. The other is a longer-wavelength absorption by atomic vibrations. For silica, the maximum transparency occurs at wavelengths in the near infrared, at about 1.5 µm (micrometers). APPLICATIONS Fiber-optic technology has been applied in many areas, although its greatest impact has come in the field of telecommunications, where optical fiber offers the ability to transmit audio, video, and data information as coded light pulses. Fiber optics are also used in the field of medicine; the wire cameras and lights used there are forms of fiber optic cable. In fact, fiber optics have quickly become the preferred mode of transmitting communications of all kinds. Its advantages over older methods of transmitting data are many, and include greatly increased carrying capacity (due to the very high frequency of light), lower transmission losses, lower cost of basic materials, much smaller cable size, and almost complete immunity to interference. Other applications include the simple transmission of light for illumination in awkward places, image guiding for remote viewing, and sensing. ADVANTAGES OF FIBER OPTIC CABLE A large copper cable contains 3000 individual wires. It takes two wires to handle one two-way conversation, which means 1500 calls can be transmitted simultaneously on each copper cable. Each fiber optic cable contains twelve fiber wires, and two fibers will carry the same number of simultaneous conversations as one whole copper cable. Therefore, one fiber cable replaces six of the larger copper ones, and 90,000 calls can be transmitted simultaneously on one fiber optic cable. LONG DISTANCE FIBER-OPTIC COMMUNICATIONS SYSTEMS AT&T's Northeast Corridor Network, which runs from Virginia to Massachusetts, uses fiber cables carrying more than 50 fiber pairs. 
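The capacity comparison in the advantages section above can be checked with simple arithmetic, using only the figures the essay gives; a small Python sketch:

```python
# Figures from the essay: a large copper cable holds 3000 wires,
# and one two-way conversation needs two wires.
copper_wires = 3000
calls_per_copper_cable = copper_wires // 2   # simultaneous calls per copper cable

# A fiber cable holds twelve fibers, and two fibers carry as many
# calls as an entire copper cable.
fibers_per_cable = 12
copper_cables_replaced = fibers_per_cable // 2

print(calls_per_copper_cable)   # 1500
print(copper_cables_replaced)   # 6
```

So the essay's 1500-call and six-cable figures follow directly from its own numbers; the separate 90,000-call figure reflects the much higher signaling rate each fiber pair supports, not this wire count.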
Using a semiconductor LASER or a light-emitting diode (LED) as the light source, a transmitter codes the audio or visual input into a series of light pulses, called bits. These travel along a fiber at a bit rate of 90 million bits per second (90,000 kbps). Pulses need boosting about every 6.2 miles, and finally reach a receiver containing a semiconductor photodiode detector (light sensor), which amplifies, decodes, and regenerates the original audio or visual information. Silicon integrated circuits control and adjust both transmitter and receiver operations. THE FUTURE OF FIBER OPTICS Light injected into a fiber can adopt any of several zigzag paths, or modes. When a large number of modes are present they may overlap, for each mode has a different velocity along the fiber. Mode numbers decrease with decreasing fiber diameter and with a decreasing difference in refractive index between the fiber core and the surrounding area. Single-mode fiber production is quite practical, and today most high-capacity systems use single-mode fibers. The present pace of technological advance remains impressive, with the fiber capacity of new systems doubling every 18 to 24 months. The newest systems operate at more than two billion bits per second per fiber pair. During the 1990s optical fiber technology is expected to extend to include both residential telephone and cable television service. Currently BellSouth is placing fiber cables containing up to 216 fibers, and manufacturers are starting to build larger ones. BellSouth has been placing fiber cables in the Orlando area since the early 1980s, and currently has hundreds of miles in service to business and residential customers. BIBLIOGRAPHY 1. 1995 Grolier Multimedia Encyclopedia, Grolier Electronic Publishing, Inc. 2. 1994 Compton's Interactive Encyclopedia, Compton's NewMedia. 3. Fiber Optics and Lightwave Communications Standard Dictionary, Martin H. Weik, D.Sc., Van Nostrand Reinhold Company, New York, New York, 1981. 
4. Fiber Optics and Laser Handbook, 2nd Edition, Edward L. Stafford, Jr. and John A. McCann, Tab Books, Inc., Blue Ridge Summit, Pennsylvania, 1988. 5. Fiber Optics and Optoelectronics, Second Edition, Peter K. Cheo, Prentice Hall, Englewood Cliffs, New Jersey, 1990. f:\12000 essays\technology & computers (295)\First generation of computers.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The First Generation The first generation of computers, beginning around the end of World War II and continuing until around 1957, included computers that used vacuum tubes, drum memories, and programming in machine code. Computers at that time were mammoth machines that did not have the power of our present-day desktop microcomputers. In 1950, the first real-time, interactive computer was completed by a design team at MIT. The "Whirlwind Computer," as it was called, was a revamped U.S. Navy project for developing an aircraft simulator. The Whirlwind used a cathode ray tube and a light gun to provide interactivity. The Whirlwind was linked to a series of radars and could identify unfriendly aircraft and direct interceptor fighters to their projected locations. It was to be the prototype for a network of computers and radar sites (SAGE) acting as an important element of U.S. air defense for a quarter-century after 1958. In 1951, the first commercially available computer was delivered to the Bureau of the Census by the Eckert-Mauchly Computer Corporation. The UNIVAC (Universal Automatic Computer) was the first computer which was not a one-of-a-kind laboratory instrument. The UNIVAC became a household word in 1952 when it was used on a televised newscast to project the winner of the Eisenhower-Stevenson presidential race with stunning accuracy. That same year Maurice V. Wilkes (developer of EDSAC) laid the foundation for the concepts of microprogramming, which was to become the guide for computer design and construction. 
In 1954, the first general-purpose computer to be completely transistorized was built at Bell Laboratories. TRADIC (Transistorized Airborne Digital Computer) held 800 transistors and bettered its predecessors by functioning well aboard airplanes. In 1956, the first system for storing files to be accessed randomly was completed. The RAMAC (Random-Access Method for Accounting and Control) 305 could access any of its 50 magnetic disks within a second, and was capable of storing 5 million characters. In 1962, the concept was expanded with research in replaceable disk packs. f:\12000 essays\technology & computers (295)\From the Abacus to the Mac.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ AnyView Professional 1.1 -- README.TXT -- 01/16/95 *************************************************************** Welcome to AnyView Professional! Whether you're a software developer, a graphic artist or a computer "novice," you're about to watch your computer perform in ways never before thought possible. *************************************************************** Table of Contents 1. Installation Notes 1a. System Requirements 1b. Installation on a Single-User System 1c. Quick Installation 1d. Custom Installation 1e. Network Client Station Installation 1f. Installation Using Microsoft Compliant OEM Setup Diskettes 2. Trouble Shooting 2a. Installation and reboot problems/errors 2b. AnyView driver not loaded 2c. Norton Desktop 3.0 2d. PC Tools 2e. Adobe Premiere 2f. BroderBund's Math Workshop 2g. The Learning Company's Reader Rabbit 2h. Number #9 GXE, GXE 64, GXE 64 Pro 2i. ATI Mach8 2j. ATI Mach32/64 2k. Cirrus Logic 5434 2l. Diamond Viper Pro 2m. Western Digital WD90C31 2n. Special notes regarding OEM provided display utilities 2o. Menu altering utilities 2p. Get-out-of-trouble Hot Keys 2q. Right Mouse Button Calls Toolbar 3. Technical notes on Color Depth Switching and Global Memory. 4. Changing Video Controllers and Reinstalling AnyView Professional. 5. 
Uninstalling AnyView Professional. *************************************************************** 1. INSTALLATION NOTES These are installation instructions for AnyView Professional. AnyView Professional may be installed over previous versions of itself. *************************************************************** *************************************************************** 1a. System Requirements *************************************************************** Before you install AnyView Professional, please confirm that your system is equipped with the following hardware and software: * An 80386 or better microprocessor (CPU), four megabytes or more of RAM (eight megabytes or more is recommended), and approximately two megabytes of hard disk space * MS-DOS or PC-DOS version 3.1 or later, Microsoft Windows version 3.1 or Windows for Workgroups 3.1 or 3.11, Enhanced Mode * You should have all of your Windows display drivers for all the resolutions and color depths that your video controller supports installed on your system before installing AnyView Professional. Also, the install may go more smoothly if you run Windows with your 256 color drivers when installing AnyView Professional. *************************************************************** 1b. Installation On a Single-User System *************************************************************** First, check AnyView Professional's requirements on the preceding page to ensure that your system has the appropriate resources to run this software. Next, start Windows as usual and proceed as follows: Starting the Installation 1. Place the AnyView Professional diskette in the appropriate diskette drive (A or B). 2. Open the File menu in the Windows Program Manager or File Manager and select Run. 3. Type [drive]:SETUP.EXE, where [drive] is the letter of the drive where you placed the AnyView Professional diskette, and press [Enter]. 
After Setup initializes your system, a screen appears with four options: Quick Installation, Custom Installation, View README.TXT and Exit. Choosing Quick Installation will completely set up your system with a minimum of effort. You can press Custom Installation to choose which features and options to install. View README.TXT opens the README.TXT file in the Notepad application. Exit terminates the program. *************************************************************** 1c. Quick Installation *************************************************************** Sit back and relax! Choosing Quick Installation automatically creates a directory on the same drive as your Windows directory, copies all of the AnyView Professional program files, creates an icon group and deletes any older versions of AnyView Professional that might be found. Press Restart Windows when prompted and AnyView Professional will be up and running. Though Quick Installation recognizes most drivers using its intuitive method of installation, there is a chance you may be prompted to enter an OEMSETUP.INF disk. In this case, see the section later in this chapter entitled "Installation Using a Microsoft Compliant OEMSETUP Diskette." After completing the Quick Installation, you may skip ahead to the chapter entitled "Getting to Know AnyView Professional." Note: If you receive an advisory message from the Setup application regarding drivers that are not installed on your system, simply install them from their floppy disk or directory using your video controller's installation program. After Windows reloads, double click the AnyView Professional icon in your program group to restart Windows with the AnyView Professional driver. *************************************************************** 1d. Custom Installation *************************************************************** 1. 
After pressing Custom Installation, the Setup application displays the directory into which it plans to install AnyView Professional. The default directory is AVPRO on the drive which contains the Windows directory. To accept the default directory, select Continue. To select a different drive or directory, type it in and then select Continue. 2. The Custom Window appears and you are asked to select which features and options should be enabled or disabled. These aren't permanent changes -- all can be changed from within the AnyView Professional application. Features: Though all of AnyView Professional's features are installed when you run the Setup application, you can choose to disable any of them should you desire to do so. These features include Catalyst, Color WYZARD, DPI WYZARD, Green Screen, OptiMemm, and True Switch Color Depth/Resolution Switching. All of these features are enabled by default. Start Up Options: By default, the Setup application creates an AnyView Professional group for your Program Manager and puts an AnyView Professional icon into the Startup group. Either of these options may be disabled. Monitor Selection: Choose the monitor selection which most closely reflects your own. For information regarding interlaced and non-interlaced monitors, consult your monitor's documentation. Driver Installation: There may be times when you choose to install AnyView Professional without using the Setup Application's intuitive method of locating and installing video drivers. For instance, you may have received new drivers from your video controller manufacturer and you would like to install them using a Microsoft Compliant OEM setup diskette. The Setup Application's intuitive method of driver installation is chosen by default. Mouse Interface Call: By default, you can bring the AnyView Professional Toolbar directly to your mouse by double clicking on your right mouse button. 
Because there are other applications which may use this same setting, the Setup application gives you the choice of either setting the middle mouse button as the interface call or of having none set at all. Interface: AnyView Professional's features can be accessed through either the Toolbar or the Control Panel. Though the Toolbar is smaller and less obtrusive than its counterpart, the Control Panel is more detailed and helps you to monitor your system. "Always Topmost" indicates that you want the interface to always be visible no matter what application you may be using. By default, the Toolbar is the chosen interface and is always topmost. Video Memory: This setting allows you to choose the amount of video memory that is installed on your video controller. One megabyte of memory is chosen as default. After making your setting choices, select Continue. 3. If you have previously installed AnyView Screen Commander for Windows, the Setup application asks whether you'd like to have it removed from your system. We recommend that you allow the Setup application to make it so. 4. Restart Windows as prompted and AnyView Professional is ready for use. *************************************************************** 1e. Network Client Station Installation *************************************************************** It's possible to install AnyView Professional on most client stations on Windows-compliant networks. There are no special steps, even with networked versions of Windows. *************************************************************** 1f. Installation Using Microsoft Compliant OEM Setup Diskettes *************************************************************** There are times when AnyView Professional cannot install intuitively because it does not have enough information regarding your video controller and video drivers. 
When this situation occurs, the installation program will bring up a dialog screen entitled "Installation with a Microsoft Compliant OEMSETUP Diskette." At this time, please insert the diskette which was shipped with your video controller and indicate the OEMSETUP.INF file using the directory tree on the right side of the window. You can also point to an OEMSETUP.INF file that has already been installed onto your hard disk if you like; however, this must be a directory other than the System directory. An OEMSETUP.INF file in your System directory may describe a different piece of hardware, and installation would not be able to continue.

Note: If you reach this window but do not have a Microsoft compliant OEMSETUP diskette, go ahead and click the Cancel button. Without access to the OEMSETUP.INF file, the TrueSwitch color depth and resolution changing features cannot be activated; however, all of AnyView Professional's other features can still be installed. If you would like to continue, choose Custom install and then uncheck the resolution and color depth switching checkboxes. Continue installation as discussed earlier in this section. You may then contact our technical support department for information on how to properly install for your video controller. Please see the "Troubleshooting" section in Appendix A for more information.

After you have pointed to the proper OEMSETUP.INF file, a dialog box will appear for your display driver setup. For the next few minutes, you will confirm the OEM-supplied drivers for the specific resolutions and color depths available for your video controller. The installation program will list the resolution for which it needs a driver. Below, in a list box, the installation program will highlight the driver it believes matches that resolution. If the suggestion is correct, simply choose Select. If the choice is not correct, highlight the correct driver and choose Select.
If the installation program cannot locate a driver to match the resolution, the Skip button is highlighted. Because few video controllers are capable of running every resolution in every color depth, it is likely you will need to skip some resolutions. If you make a mistake at any time, you can choose Start Over to make new choices.

***************************************************************
2. Troubleshooting
Contains special notes and known incompatibilities pertaining to specific applications and display controllers.
***************************************************************

***************************************************************
2a. Installation and reboot problems/errors
***************************************************************
If you encounter installation problems, please refer to section 6, entitled "Uninstalling AnyView Professional."

***************************************************************
2b. AnyView driver not loaded
***************************************************************
If you reboot or shut off your computer without exiting Windows, the next time you run Windows, the AnyView Professional driver may not be loaded. Just run the AnyView Professional application and select 'Yes' when prompted to restart Windows.

***************************************************************
2c. Norton Desktop Version 3.0
***************************************************************
NDW 3.0 is not compatible with Color Switching On-the-Fly. Dragging an icon in 16 or 24 bit mode with Color Switching enabled causes an error. To avoid this, you can either disable Color Switching (see the AnyView Professional Desktop Dialog Box) or not rearrange icons on the desktop while in 16 or 24 bit mode.

***************************************************************
2d.
PC Tools for Windows
***************************************************************
With Color Switching enabled, icons created by PC Tools may become distorted if you are not in 256 color mode. When creating a new icon from the File menu, importing a group, or installing new software, you should switch to 256 color mode before doing so.

***************************************************************
2e. Adobe Premiere
***************************************************************
Switching resolutions or color depths while running Adobe Premiere will cause an error. Switch to your desired resolution and color depth before running Premiere.

***************************************************************
2f. Broderbund's Math Workshop
***************************************************************
The first time Math Workshop is run, it configures itself by profiling your system. This configuration will cause an error with AnyView's OptiMemm set to High. Set OptiMemm to Low before installing and running Math Workshop for the first time.

***************************************************************
2g. The Learning Company's Reader Rabbit
***************************************************************
Reader Rabbit is not compatible with Color Switching. In order to run Reader Rabbit, you must disable Color Switching from the AnyView Professional Desktop Dialog.

***************************************************************
2h. Number #9 GXE, GXE 64, & GXE 64 Pro
***************************************************************
The #9 GXE display drivers are not compatible with AnyView's Color Switching On-the-Fly. (However, the GXE 64 and GXE 64 Pro are supported.) On the GXE 64 and GXE 64 Pro, AnyView Professional only works with version 1.36 (or earlier) of the #9 drivers.

***************************************************************
2i.
ATI Mach 8
***************************************************************
AnyView Professional is not compatible with the new ATI Mach 8 drivers (machw3.drv). You must install the Mach 32 type drivers (mach.drv).

***************************************************************
2j. ATI Mach 32 and Mach 64
***************************************************************
Color and Resolution Switching On-the-Fly are incompatible with ATI's Crystal Fonts. If you want to use Crystal Fonts, disable Color and Resolution Switching from the AnyView Professional Desktop Dialog, and then enable Crystal Fonts. On some Mach 32 cards, Resolution or Color Switching may take several seconds, during which time your screen will remain black. This is normal for the ATI Mach 32.

***************************************************************
2k. Cirrus Logic 5434
***************************************************************
AnyView is not compatible with Version 1.2x of the Cirrus Logic 5434 display drivers. It is compatible with all previous versions.

***************************************************************
2l. Diamond Viper Pro
***************************************************************
If you encounter difficulties switching into 16 million color mode (24 bit), try changing the following lines in the AVPRO.INI file, [AnyViewProSupport] section, to read as follows:

Driver640x480x16M=p9100_32.drv
Driver800x600x16M=p9100_32.drv

***************************************************************
2m. Western Digital WD90C31
***************************************************************
At the time of AnyView Professional's release, the Western Digital display drivers for the WD90C31 chipset will not work with Color Switching On-the-Fly. AnyView Professional will automatically disable this feature when it is installed.

***************************************************************
2n.
Special notes regarding OEM provided display utilities
***************************************************************
If you use a display configuration utility other than AnyView Professional to switch resolutions or color depths, the AnyView driver will not be loaded after Windows restarts. To reinstall the AnyView driver, run the AnyView Professional application and select 'Yes' when prompted to restart Windows. If you use a display configuration utility to set your monitor's refresh rates, you may have to do the same. If, however, the display configuration utility allows you to continue rather than just restarting Windows, select this and then use AnyView Professional to change away from your current resolution, and then back. With some cards (Diamond Stealths, Orchid Celsius, Weitek P9x00, and more) this action will save the new refresh rate information without having to restart Windows.

***************************************************************
2o. Menu modifying applications
***************************************************************
Applications such as Icon Hear-it or Plug-in that modify other applications' menus may not work with the OptiMemm feature set to High. If this occurs, set OptiMemm to Low.

***************************************************************
2p. Get-out-of-trouble Hot Keys
***************************************************************
AnyView Professional provides four hot keys to get you out of trouble quickly should you select a configuration that doesn't work properly or if you "get lost" on the Virtual Desktop. Their default settings are listed in the upper left of the "Interface" file folder:

* [CTRL]+[ALT]+[R] -- Restore to Last Mode. Choosing this hot key causes AnyView Professional to return you to your last screen mode. This option is useful when exiting the Hardware Zoom or when you have chosen an invalid screen mode that renders the screen unviewable.

* [CTRL]+[ALT]+[C] -- Center AnyView to Screen.
This hot key brings the AnyView Professional Toolbar or Control Panel to the center of the screen.

* [CTRL]+[ALT]+[V] -- Restart with VGA.DRV. This key sequence is used to restore Microsoft's VGA.DRV. We recommend that you try [CTRL]+[ALT]+[R] before resorting to this key sequence. Try using [CTRL]+[ALT]+[V] if AnyView Professional does not install correctly and you are presented with a blank screen.

* [CTRL]+[ALT]+[6] -- Reset Resolution to 640x480. This hot key restores the display to 640 by 480 in 256 colors.

You may change these hot keys to any [CTRL]+[ALT] combination that you find convenient. This is helpful if you find that one or more of your hot key combinations conflict with those of another application.

***************************************************************
2q. Right Mouse Button Calls Toolbar
***************************************************************
There are two ways to change the mouse button call. The first is to reinstall and choose "Custom", then choose to change the mouse button under "Interface." The second way is to edit AVPRO.INI in your AVPRO directory. If MouseHook isn't present, add it to the [AnyViewPro] section: MouseHook=on for the right mouse button call, MouseHook=middle for the middle mouse button call, and MouseHook=off for no mouse call.

***************************************************************
3. Technical notes on Color Depth Switching and Global Memory
***************************************************************
When running image editing software, it is recommended that you switch to your desired color depth before running the application. Editing a bitmap and then switching color depths may degrade the quality of the bitmap; saving the bitmap file at that time will save the degraded version. Some monochrome bitmaps (a dragged icon, for instance) won't display in color modes above 256 color mode.
This is done intentionally to work around a bug present in many display drivers that occurs when running with AnyView Professional. Your display drivers may not have this bug; you can find out by adding the following line to the [AnyViewPro] section of the AVPRO.INI file located in the AVPRO directory:

DIB8to1on=on

When color switching is enabled, Windows will initially boot up only in 256 color mode. AnyView Professional can be configured to automatically switch you to the color mode that you were in when you exited Windows. To do this, add the following line to the [AnyViewPro] section of the AVPRO.INI file located in the AVPRO directory:

BootRestoreColor=on

If the system or applications you are running are displaying large bitmaps or a large number of bitmaps, a color switch may take a long time. This is because AnyView Professional needs to convert all of the bitmaps to the new color mode. When color switching is enabled, bitmaps that the system or applications display require more memory than with color switching disabled. It is recommended that you do not configure your desktop to display a large bitmap as your background wallpaper (from the Windows Control Panel's Desktop configuration). It is also recommended that you do not open multiple large bitmap files simultaneously, particularly when you are in a high color (32K, 64K, 16M) mode. If you experience memory problems, try increasing your Windows swap file size to 4 or 8 megabytes.

Global Memory: With the Color Switch feature enabled, the amount of global memory available to Windows will decrease. OptiMemm cannot help with this issue: it increases the amount of Windows Resource memory available, thus allowing you to run more applications, but not the amount of global memory available. Examples of operations that need a large amount of global memory are opening a very large bitmap in a high color mode, or opening multiple bitmap files simultaneously.
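The optional AVPRO.INI lines described above all live in the same section. A minimal sketch of how that section might look with the optional entries added -- the key names come from this document, but the rest of your own AVPRO.INI will contain additional entries written by the Setup application:

```ini
; AVPRO.INI -- illustrative fragment only; do not replace your real file.
[AnyViewPro]
; Test whether your display drivers can show monochrome bitmaps in color:
DIB8to1on=on
; Boot back into the color mode you were in when you exited Windows:
BootRestoreColor=on
; Right mouse button calls the Toolbar (middle and off are the other values):
MouseHook=on
```

Edit the file with any text editor while Windows is shut down, then restart Windows for the changes to take effect.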
***************************************************************
4. Changing Video Controllers and Reinstalling AnyView Professional
***************************************************************
We recommend that you reset Windows to the Microsoft VGA driver before changing your video controller. You can do this by running SETUP from the Windows directory while at the DOS level. Please see your Windows manual for more information. After inserting your new video controller, install the video controller's drivers using the manufacturer's instructions. When you have restarted Windows, run SETUP from your AnyView Professional distribution diskette and follow the installation instructions listed earlier in this section.

***************************************************************
6. Uninstalling AnyView Professional
***************************************************************
Getting back into Windows if the installation process has failed, at the DOS level: If you install AnyView Professional and find that Windows will not restart after the initial reboot, SETUP has failed to configure your system correctly. To fix this problem and get you back into Windows, we have included a DOS level uninstaller. The DOS uninstaller is located in the directory that was assigned to AnyView Professional during installation (the default installation directory is "AVPRO"). From within AnyView Professional's directory, type "AVUNINST" at the DOS prompt. This command will reset Windows to the original display driver, but it will not delete the AnyView Professional files/components. After returning to Windows, you can perform a complete uninstall of AnyView Professional by using the Uninstall icon located in the AnyView Professional Program Group.

Uninstalling from within Windows: Should you decide for one reason or another to uninstall AnyView Professional, the provided Uninstall program will do the job quickly and efficiently.
Use the AnyView Uninstall icon in the AnyView Professional program group.

1. Click on the Uninstall icon in your AnyView Professional group, or run the AVUNINST.EXE file in your AnyView Pro directory from the File Manager or Program Manager.

2. After Uninstall deletes AnyView Professional's icons from the Program Manager, a dialog box will ask you to restart Windows. Uninstall will be complete after you restart.

f:\12000 essays\technology & computers (295)\Gemstone 3.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Josh Barrie 1/15/97

Thomas Alva Edison

CAPTURE:
I.) What would we do without incandescent light bulbs, phonographs, motion picture cameras - things we all take for granted?
A.) They were all invented by me, Thomas Alva Edison.
B.) I made more than 1,000 inventions.

MOTIVATE:
II.) How can I capture sound and motion in time? Is there a way to make a practical light bulb? Can people talk to each other from great distances? These are some of the questions I asked myself to make some of my most famous inventions.

ASSERT:
III.) All of my inventions have changed many lives. I'm here to talk about these.

PREVIEW:
IV.) Let's discuss my most popular and important inventions, completed during my 84-year life.
A.) My incandescent light bulb
B.) My phonograph
C.) My improved telephone

POINT SUPPORT:
V.) The first invention I am going to talk about is my incandescent light bulb.
A.) There was already an electric light bulb out, called an arc bulb, but it was way too bright for practical use.
B.) My incandescent light bulb has a special wire, or filament, made out of carbon.
VI.) Another invention I'll refer to is the phonograph.
A.) You may know this as a recorder and player.
B.) It was crude, but it worked and was used for a long time.
C.) It had a long tube with a funnel at the end where you talk into it. The rest of the machine looked sort of like a typewriter.
VII.) The final invention I'll talk about is my improved telephone.
A.)
There were already telephones out, but they had low-quality sound and short range.
B.) I improved it so it sounded better and had a longer range.

ENDING:
VIII.) When you turn on a light, go to a movie, call your friend on the phone, or listen to CDs, remember: it was all made possible by me, Thomas Edison, and my innovative mind.

f:\12000 essays\technology & computers (295)\Get Informed.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Get Informed! Buying the right computer can be complicated. Because of this, many people are deterred from using or purchasing a very beneficial machine. Some people have questions about memory, Windows 95, and choosing the best system to purchase. Hopefully, I can clear up some of these terms and inform you on what hardware is available.

How much memory do you really need? As much as you can get. Thanks to today's sloppy programmers, you can't have too much memory. Today's software is about 50 percent wasted code. That means that there is a bunch of memory being used by your computer to do absolutely nothing. It's not like in the past, when a programmer had to get a program to run under 512K. Programmers think you have unlimited memory. As a result, they don't worry about how much memory they use. When writing a program, programmers use compilers like Visual C++. When they use prewritten routines from compilers, it adds a lot of useless data. Instead of removing this useless data, the lazy programmer leaves it. Not only does this affect your memory, it also affects how much hard drive space you need. The bigger the program, the more space it takes to store. I wouldn't suggest buying anything under a 2 gig hard drive. Why? Because by the time you load your system (Windows 95, DOS) and other software, your hard drive is already filled up. How are you going to save the document you wrote in WordPerfect when your hard drive is full?
It's usually cheaper in the long run to buy the biggest hard drive available. Plus, you always want to have room for your games. After all, who wants to spend their whole life working?

As far as processors go, I suggest the Cyrix 6x86 166+. It's the best processor for the buck, and one of the fastest. The processor costs about $300 less than the Pentium version. It's got plenty of processing power to play those high-graphics 3D games and make your Internet browser fly. It's also a necessity for programs like AutoCAD 3D and Adobe Photoshop.

For video, I suggest at least a 2 meg, MPEG-compatible video card. The best all-around video card, I think, is the Matrox Millennium 3D. It comes in 2 meg, 4 meg, and 8 meg cards. The 4 meg card runs around $230.00. You can't beat that. The reason you want the most memory on your video card that you can afford is that the more memory you have, the faster the graphics and the more colors you can display. The memory on a video card is used for loading up screen pages in advance, before they're on your screen. For example, when you're watching an AVI or MPEG movie, the computer has already loaded four screens of that movie before it needs them. This means you don't wait for them to load. A sign of not having enough video memory is that when you're watching an AVI movie, you might see flicker, or the movie stalls. This is because you're waiting for the computer to load the images.

Windows 95. Is all the hype true? NO! Windows 95 has a lot of bugs in it. Most of the problems I've seen are in the installation process. When you go to install new hardware or software, you don't have complete control over what your computer does. Windows 95 wants to make all the decisions for you. Unfortunately, most of the time it doesn't make the right decisions. There are ways to get around this; it just takes a little patience. The biggest problem I've had is removing software and hardware.
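The memory-versus-colors tradeoff described above is simple arithmetic: the card's frame buffer must hold width x height pixels at the chosen color depth. A small sketch of that calculation (the resolution and card sizes are illustrative values, not specifics from this article):

```python
# Frame buffer arithmetic behind "more video memory = more colors and
# higher resolution." Figures are illustrative only.

def framebuffer_bytes(width, height, bits_per_pixel):
    """Memory needed to hold a single screen at the given mode."""
    return width * height * bits_per_pixel // 8

# A 2 meg card cannot hold one 24-bit (16 million color) screen at 1024x768:
need = framebuffer_bytes(1024, 768, 24)   # 2,359,296 bytes (~2.25 MB)
card_2mb = 2 * 1024 * 1024                # 2,097,152 bytes

print(need > card_2mb)  # True: drop the resolution or the color depth
```

A 4 meg card clears that bar with room to spare, which is one reason the mid-sized card is the sweet spot.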
Windows deletes all the drivers and the programs, but never cleans out the main system file. This means the program is gone but your system thinks it's still there. This can give you a lot of errors and, in some cases, cause your computer to crash. Thankfully, there is software being written right now to solve this problem. Whether or not you like Windows 95, Microsoft has cornered the market, and most software written today is for Windows 95. I personally think Windows 95 could be a great system if Microsoft would take the time to fix all the bugs and minor irritations instead of spending their time trying to figure out a new way to scam the PC user, like making Windows 97.

Hopefully, I haven't confused you. Instead, I hope I have cleared some things up for you. My best advice to soon-to-be computer owners is to take your time in buying your system. Do some research. Don't believe all the hype. Computer salesmen don't make money helping you out. They make money selling you a computer for the most profit.

f:\12000 essays\technology & computers (295)\Global Village Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Imagine a place where people interact in business situations, shop, play video games, do research, or study and get tutoring. Now imagine that there are no office buildings, no shopping centers, no arcades, no libraries, and no schools. These places all exist in a location called the Internet - "an anarchic system (to use an oxymoron) of public and private computer networks that span the globe" (Clark 3). This technological advance not only benefits people of the present, but also brings forth future innovations. People use the Internet for many purposes, yet there are three popular reasons. First, there is the sending and receiving of messages through electronic mail. Second, there are discussion groups covering a wide range of topics in which people can join.
Finally, people are free to browse the vast collection of resources (or databases) of the World Wide Web.

Electronic mail (e-mail) brings a unique perspective to the way we communicate. Although it has not replaced traditional means of communication such as letters and telephone calls, it has created a more efficient method of transmitting information. E-mail shortens the interval between sending and receiving a message: an e-mail message sent halfway around the world can arrive at its destination within a minute or two. In comparison, a letter can take from a few days to a couple of weeks, depending on the distance it travels. Furthermore, e-mail is inexpensive. The cost of a connection to the Internet is cheaper than that of cable television. Evidently, e-mail is both time-saving and cost-effective.

Discussion groups are a great way to interact with others in the world and to broaden one's horizons. The response is instantaneous, just like the telephone, except it is non-verbal (typed). Discussion groups are on-line services that make use of non-verbal communication in the interest of the user. Services can range from tutoring sessions to chat lines where people just want to mingle. Communication through the Internet is a way of meeting new people. There is no racial judgement in meeting on the Internet because physical appearance is not perceived. However, attitude and personal characteristics are evident from the style in which a person talks (or types). This kind of communication helps narrow the gap between people and cultural differences. Communicating in discussion groups sometimes leads to one-to-one conversations that soon enough become a link to friendship. Connections are made when people meet each other; therefore, information on interesting Web sites can be passed on.

The World Wide Web (WWW) holds information that answers users' questions.
The main purpose of the WWW is to offer a variety of information, ranging from literature to world geography. The WWW contains Web sites created by everyone from government agencies and institutions to business companies and individuals. The WWW carries text, graphics, and sound to catch the interest of people browsing through the different Web sites. New Web sites are added daily, while existing sites are revised to keep their information current and interesting. This growth of information will soon become a world library of topics on anything that one can imagine. A person using the Internet for one day encounters more information than a person reading in the library for a whole year. It is the convenience of the Internet that allows a person to go through an enormous amount of information in a short period of time. This information community can pull the minds of users closer together, thus making the world smaller. The Internet is full of people who are requesting and giving out information to those who are interested, since "information wants to be free." - Stewart Brand (Van der Leun 25). Hypothetically, if everyone is connected to at least one other person on the Internet, eventually everyone will meet each other. In other words, the world will gradually evolve into a "global village," which can be defined as "the world, especially of the late 1900's, thought of as a village, a condition arising from shrinking distance by instantaneous world-wide electronic communication" (Nault 907). Thus, the Internet is a wonderful tool and medium through which people can interact with the information society. After all, information is like the building blocks of technological advancement.

f:\12000 essays\technology & computers (295)\Government Intervention on the Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Government Intervention on the Internet
CIS 302 - Information Systems I
John J.
Doe XXX-XX-XXXX March 12, 1997

During the last decade, our society has become based on the ability to move large amounts of information across great distances quickly. Computerization has influenced everyone's life in numerous ways. The natural evolution of computer technology and this need for ultra-fast communications have caused a global network of interconnected computers to develop. This global network allows a person to send E-mail across the world in mere fractions of a second, and allows a common person to access wealths of information worldwide. This newfound global network, originally called ARPANET, was developed and funded solely by and for the U.S. government. It was to be used in the event of a nuclear attack in order to keep communications lines open across the country by rerouting information through different servers. Does this mean that the government owns the Internet, or is it no longer a tool limited by the powers that govern? Generalities such as these have sparked great debates within our nation's government. This paper will attempt to focus on two high-profile ethical aspects concerning the Internet and its usage: Internet privacy and Internet censorship. At the moment, the Internet is the epitome of our First Amendment right to free speech. It is a place where people can speak their minds without being reprimanded for what they say or how they choose to say it. But the Internet also contains a huge collection of obscene graphics, anarchists' cookbooks, and countless other things that offend many people. There are over 30 million Internet surfers in the U.S. alone, and much is to be said about what offends whom and how. As with many new technologies, today's laws don't apply well when it comes to the Internet. Is the Internet like a bookstore, where servers cannot be expected to review every title?
Is it like a phone company, which must ignore what it carries because of privacy; or is it like a broadcast medium, where the government monitors what is broadcast? The problem we are facing today is that the Internet can be all or none of the above, depending on how it is used.

Internet censorship: what does it mean? Is it possible to censor amounts of information that are almost unimaginable? The Internet was originally designed to "find a way around" in case of broken communications lines, and it seems that explicit material keeps finding its "way around" too. I am opposed to such content on the Internet and therefore am a firm believer in Internet censorship. However, the question at hand is just how much censorship the government should impose. Because the Internet has become the largest source of information in the world, legislative safeguards are indeed imminent. Explicit material is not readily available over the mail or telephone, and distribution of obscene material is illegal. Therefore, there is no reason this material should go unimpeded across the Internet. Sure, there are some blocking devices, but they are no substitute for well-reasoned law. To counter this, the United States has set regulations to determine what is categorized as obscenity and what is not. By laws set previously by the government, obscene material should not be accessible through the Internet. The problem society is now facing is that cyberspace is like a neighborhood without a police department. "Outlaws" are now able to use powerful cryptography to send and receive uncrackable communications across the Internet. Devices set up to filter certain communications cannot filter that which cannot be read, which leads to my other topic of interest: data encryption.

By nature, the Internet is an insecure method of transferring data. A single E-mail packet may pass through hundreds of computers between its source and destination.
At each computer, there is a chance that the data will be archived and someone may intercept it, private or not. Credit card numbers are a frequent target of hackers. Encryption is a means of encoding data so that only someone with the proper "key" can decode it. So far, recent attempts by the government to control data encryption have failed. The government is concerned that encryption will block its monitoring capabilities, but there is nothing wrong with asserting our privacy. Privacy is an inalienable right given to us by our Constitution. For example, your E-mail may be legitimate enough that encryption is unnecessary. But if we do indeed have nothing to hide, then why don't we send our paper mail on postcards? Are we trying to hide something? By comparison, is it wrong to encrypt E-mail? Before the advent of the Internet, the U.S. government controlled most new encryption techniques. But with the development of the WWW and faster home computers, it no longer has the control it once had. New algorithms have been discovered that are reportedly uncrackable even by the FBI and NSA. The government is concerned that it will be unable to maintain the ability to conduct electronic surveillance into the digital age. To stop the spread of data encryption software, it has imposed very strict laws on its exportation. One programmer, Phil Zimmerman, wrote an encryption program he called PGP (Pretty Good Privacy). When he heard of the government's intent to ban the distribution of encryption software, he immediately released the program to the public for free. PGP is among the most powerful public encryption tools available. The government has not been totally blind to the need for encryption. The banking industry has sponsored an algorithm called DES, which banks have used for decades. To some, its usage by banks may seem more ethical, but what makes it unethical for everyone else to use encryption too?
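The "key" idea described above can be illustrated with a toy cipher. This is not PGP or DES -- just a minimal XOR sketch showing that the same secret key both scrambles the data and, when applied again, restores it, while a wrong key does not:

```python
# Toy illustration of symmetric, key-based encryption (NOT a real cipher):
# XOR each byte of the message with a repeating secret key.
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR data against the repeating key; applying it twice restores the data."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

message = b"credit card 1234"
key = b"secret"

ciphertext = xor_crypt(message, key)
assert ciphertext != message                       # unreadable in transit
assert xor_crypt(ciphertext, key) == message       # the right key recovers it
assert xor_crypt(ciphertext, b"wrong!") != message # the wrong key does not
```

Real systems like PGP layer public-key mathematics on top of this basic principle so that the two parties never have to share the secret key over the insecure network in the first place.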
The government is now developing a new encryption method that relies on a microchip which can be placed inside almost any type of electronic equipment. It is called the Clipper chip; it is said to be 16 million times more powerful than DES, and today's fastest computers would take approximately 400 billion years to decipher it. At the time of manufacture, each chip is loaded with its own unique key, and the government gets a copy. But don't worry: the government promises it will use these keys to read traffic only when duly authorized by law. Before this new chip can be used effectively, however, the government must get rid of all other forms of cryptography. The relevance of my two topics of choice seems to have been conveniently overlooked by our government. Internet privacy through data encryption and Internet censorship are linked in one important way. If everyone used encryption, there would be no way for an innocent bystander to stumble upon something they were not meant to see. Only the intended receiver of an encrypted message can decode it and view its contents; once it is encrypted, not even the sender can read it. Each coded message can also carry an encrypted signature verifying the sender's identity. Gone would be the hate mail that causes so many problems, along with the ability to forge a document under someone else's address. If the government did not have ulterior motives, it would mandate encryption, not outlaw it. As the Internet grows throughout the world, more governments may try to impose their views on the rest of the world through regulation and censorship. If too many regulations are enacted, the Internet as a tool will become nearly useless, and our mass communication device, a place of freedom for the mind's thoughts, will fade away. We must regulate ourselves so as not to force the government to regulate us. 
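The property that only the intended receiver can decode a message, which underlies both PGP and the escrowed Clipper keys, can be illustrated with textbook RSA using deliberately tiny numbers. This is a standard classroom sketch, not how PGP or Clipper is actually implemented, and real keys are hundreds of digits long:

```python
# Textbook RSA with toy parameters -- illustrative only, not secure.
p, q = 61, 53                        # two primes, kept secret by the receiver
n = p * q                            # public modulus (3233)
e = 17                               # public exponent, published to the world
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, known only to the receiver

message = 65                         # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)      # anyone (even the sender) can encrypt with (e, n)
decrypted = pow(ciphertext, d, n)    # but only the holder of d can get the message back

assert decrypted == message
assert ciphertext != message
```

Key escrow, in this picture, simply means the government keeps its own copy of `d`; that is exactly why critics objected to the Clipper scheme.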
If encryption is allowed to catch on, there will no longer be a need for the government to intervene on the Internet, and the biggest problem may work itself out. As a whole, we need to rethink our approach to censorship and encryption and allow the Internet to continue to grow and mature. Works Cited Compiled Texts. University of Miami. Miami, Florida. http://www.law.miami.edu/c6.html. Lehrer, Dan. "The Secret Shares: Clipper Chips and Cyberpunks." The Nation, Oct. 10, 1994, 376-379. Messmer, Ellen. "Fighting for Justice on the New Frontier." Network World, CD-ROM database, Jan. 11, 1993. Messmer, Ellen. "Policing Cyberspace." U.S. News & World Report, Jan. 23, 1995, 55-60. Webcrawler Search Results. Webcrawler. Query: Internet, censorship, and ethics. March 12, 1997. Zimmerman, Phil. Pretty Good Privacy v2.62. Online: ftp://net-dist.mit.edu, directory /pub/pgp/dist/pgp262dc.zip. f:\12000 essays\technology & computers (295)\Hackerne.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ INFORMATION TECHNOLOGY Hackers By: Philip Svendsen Subject: Information Technology C+B Teacher: Arne HACKER On Wednesday, November 10, 1993, a telephone rings at the University of Copenhagen: "Hello, this is the security chief speaking. Some hackers are trying to abuse your access to the university's computer system. Just give me your password so I can get in and stop them." The man at the university is naturally skeptical, but the voice on the phone clearly belongs to a computer expert, and the security chief's name is correct too, so after a little persuasion he reveals his secret password. He should never have done that. For the voice at the other end is anything but a security expert. He is a hacker. And to him, the password is as valuable as a crowbar and a lockpick are to a burglar. After breaking into the university's computers, he starts snooping around the system. He is looking neither for research results nor for exam papers. 
To him, the university's computers are merely a platform for getting further out into the world. Every time a hacker gains power over a new computer, he has won a victory. That, in fact, is what it is all about: tricking the computer into giving him unconditional power over the entire system. As soon as he has it, the computer loses its interest, and he hops onward onto the electronic information highway, where several hundred thousand computers are networked across the world. There he searches for the way onward. What the hacker and his colleagues do not know, however, is that the security man whose name they abused to obtain the secret password is hot on their heels. They managed to hack in eleven countries before the trap snapped shut. When the break-in at the University of Copenhagen took place, security staff at Uni-C, Denmark's computing center for research and education, had known for months that a group of Danish hackers was very active. But they had not yet managed to get a clear enough picture of the hackers' movements to say anything precise about where they were operating from. They succeeded barely a month later. On Wednesday, December 8, the trap snapped shut on four hackers. The four young men, known by the cover names "Le Cerveau", "Dixie", "Zephyr" and "Wedlock", were all between 17 and 23 years old, and their arrest opened the first chapter of what grew into Scandinavia's biggest hacker case to date. In the last days before the arrests, the hackers "worked" almost around the clock. They broke into some fifty computer systems a day and, besides Denmark, visited Belgium, Brazil, England, Greece, Japan, Israel, Norway, Sweden, Germany and the USA. Small irregularities give the hackers away. As in earlier hacking cases, security consultant Jørgen Bo Madsen of Uni-C joined the hunt for "Le Cerveau", "Dixie", "Zephyr" and "Wedlock". But since the case is still pending, he declines to discuss it directly. 
He will speak only in general terms: "A hacker hunt always begins with someone noticing that something is not the way it usually is." The traces can be obvious, such as parts of a computer system having been deleted, or an unusual number of people typing wrong passwords when they dial into the system. In nine out of ten cases there is a natural explanation. But in the tenth, the suspicion of a hacker attack proves well founded. And then the hunt is on. "Sometimes we arrive too late. We can see clear traces of the hackers; they may, for example, have set up a back door. But they no longer use it, and then the party is over," says Jørgen Bo Madsen. A back door is a secret hole in the computer's security system which the hackers make themselves after breaking into it. The back door always stands open, so the hackers can enter the computer at any time without supplying so much as a password. If, on the other hand, the hackers are still active inside the computer, the security people quietly begin monitoring their activities. "We watch what they do and try to get an overview of their movements. For us, the task is to find the common thread in their activities before they lose interest in the machine and hop on to another one." The biggest problem for Jørgen Bo Madsen and his colleagues is that the hackers take many long detours to cover their tracks. It is not unusual for a hacker to start by breaking into a computer in Denmark and from there hop to a system in, say, the USA. There he may jump between four different universities before returning, via Germany, to the computer in Denmark he really wanted to take on. "That makes it very difficult for us to trace them. Within a short time, 30 different system administrators can be involved, around the world several times, in the same case." 
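The kind of irregularity Madsen describes, unusually many wrong passwords, is exactly what a simple log scan can surface. A minimal sketch follows; the log format, account names, and threshold are all invented for illustration:

```python
# Count failed logins per account in a hypothetical login log.
# Accounts with many failures in a short span are worth a closer look.
from collections import Counter

log_lines = [
    "1993-11-10 21:04 LOGIN FAIL user=jensen",
    "1993-11-10 21:05 LOGIN FAIL user=jensen",
    "1993-11-10 21:05 LOGIN OK   user=larsen",
    "1993-11-10 21:06 LOGIN FAIL user=jensen",
]

# Tally only the failed attempts, keyed by account name.
failures = Counter(
    line.split("user=")[1]
    for line in log_lines
    if "LOGIN FAIL" in line
)

# Flag accounts with three or more failed attempts.
suspicious = [user for user, count in failures.items() if count >= 3]
print(suspicious)
```

Nine times out of ten, as Madsen says, such a flag has an innocent explanation, such as a forgotten password; the scan only tells the hunter where to look.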
There is not much James Bond about it. Unlike American films, where slick digital detectives zoom in on the hackers and nail them to a virtual crime scene with a few keystrokes, reality is far less action-packed. "There is not much James Bond about what we do," says Jørgen Bo Madsen, and explains: "Most of the time goes into communicating with other system administrators and into reading log files." A log file is, figuratively speaking, a "recording" of everything that happens on a computer. And even though the log files are a couple of days old by the time Jørgen Bo Madsen gets them, the long runs of numbers and letters are still exciting reading for him. "As I work through the log files, I also get to know the hackers. I can tell, for instance, whether it was the clumsy one, who always swaps the letters E and R, who was there. Or I can recognize the highly skilled one, who checks every corner of the system in no time and builds himself a back door so he can always return." By the time Jørgen Bo Madsen reaches the point where he can survey the hackers' movements and knows their habits, three months have typically passed. For roughly a hundred days he has followed the trail and has presumably been around the world several times. But it is not only the hacker hunter who lives with the hackers day and night. The hackers also live with the hunter. And hackers, who master the art of hiding in computers better than anyone, are also experts at catching the scent of others trying to hide in the same computers. Hackers have several concealment tactics. One of the more refined, usable only when they have full control of the computer, is to delete the log files that testify to their movements. Another, far more widespread, tactic is to change route at regular intervals, setting the hunter a couple of days back in his tracing work. The hunter is not one bit better than his prey. For the hunter also lays smokescreens and sets traps. "We take chances all the time. 
If, for example, we want them to take a different route where our tracing options are better, or if a computer is too risky to leave open, we shut the machine down. To avoid arousing the hackers' suspicion, we post a false message saying the system is down for maintenance or the like," says Jørgen Bo Madsen. If the hackers behave the way hackers usually do, sooner or later they will, quite involuntarily, give the hunters a helping hand. They do so because the number of computers they control grows explosively within a few months. Each system gives access to a whole range of new systems, which in turn give access to... The many possibilities often make the hackers careless. They get sloppy, forget that they are not invulnerable, and begin to slacken on "security." Typically, they cut down on the number of computers they use as relay stations, making it easier for the hunters to follow their trail. Not everyone dares report the hackers to the police. When, and if, Jørgen Bo Madsen succeeds in tracing the telephone line the hackers work from, his own job is essentially done. All he has left to do is call the owner of the computer system in question and tell him what he knows. And only if the owner wishes to report it to the police can an actual trace of the telephone line come into play. But if the first computer the hackers use on their worldwide burglary tour belongs, say, to a bank that will not risk the public spotlight, the hunt ends a few meters from the prey. Often several companies are attacked at once. In the current case against the four Danish hackers, there were solid traces of 13 digital break-ins in Denmark. Only nine of the victims went to the police. One reason evidence was found for "only" 13 cases of hacking in Denmark was that many of those holding incriminating material managed to delete their traces. They did so because they were warned. 
The warning was spread across Denmark by the press. When the rumor of the first arrests reached the hard core of the hacker scene, they contacted the press and reported that a major hacker raid was under way. And thanks to the press, it took only a few hours for the rumor to reach the remotest corner of the country. Cheeky indeed! f:\12000 essays\technology & computers (295)\Hackers Information Warfare.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Abstract The popularity of the Internet has grown immeasurably in the past few years. Along with it, the so-called "hacker" community has grown and risen to a level where it is less of a black-market scenario and more of an "A Current Affair" scenario. Misconceptions as to what a hacker is and does run rampant in everyone who thinks they understand the Internet after using it a few times. In the next few pages I am going to do my best to establish the true definition of what a hacker is, how global economic electronic warfare ties into it, and some background on the Internet, along with a plethora of scatological material purely for your reading enjoyment. I will use the least technical computer terms I can, but at times, in order to make my point, I have no choice. Geoff Stafford Dr. Clark PHL 233 There are many misconceptions as to the definition of what a hacker truly is. In all my research, this is the best definition I've found: Pretend you're walking down the street, the same street you have always walked down. One day, you see a big wooden or metal box with wires coming out of it sitting on the sidewalk where there had been none. Many people won't even notice. Others might say, "Oh, a box on the street." A few might wonder what it does and then move on. The hacker, the true hacker, will see the box, stop, examine it, wonder about it, and spend mental time trying to figure it out. Given the proper circumstances, he might come back later to look closely at the wiring, or even be so bold as to open the box. 
Not maliciously, just out of curiosity. The hacker wants to know how things work.(8) Hackers truly are "America's Most Valuable Resource,"(4:264) as former CIA officer Robert Steele has said. But if we don't stop screwing over our own countrymen, we will never be looked at as anything more than common gutter trash. Hacking computers for the sole purpose of collecting systems like space-age baseball cards is stupid and pointless, and can only lead to a quick trip up the river. Let's say that everyone was given an opportunity to hack without any worry of prosecution, with free access to a safe system to hack from, the only catch being that certain systems were off limits. Foreign military, government, financial, commercial and university systems would all still be fair game. Every operating system, every application, every network type, all open to your curious minds. Would this be a good alternative? Could you follow a few simple guidelines in exchange for virtually unlimited hacking with no worry of governmental interference? Where am I going with this? Right now we are at war. You may not realize it, but we all feel the implications of this war, because it is a war with no allies and enormous stakes. It is a war of economics. The very countries that shake our hands over the conference tables of NATO and the United Nations are picking our pockets. Whether it be the blatant theft of American R&D by Japanese firms, the clandestine and governmentally sanctioned bugging of Air France first-class seating, or the cloak-and-dagger hacking of the SWIFT network(1:24) by the German BND's Project Rahab(1:24), America is getting screwed. Every country on the planet is coming at us. Let's face it: we are the leaders in everything. Period. Every important discovery in this century has been made by an American or by an American company. Certainly other countries have profited better from our discoveries, but nonetheless, we are the world's think-tank. 
So, is it fair that we keep getting shafted by these so-called "allies"? Is it fair that we sit idly by, like some old hound too lazy to scratch at the ticks sucking out our life's blood by the gallon? Hell no. Let's say that an enterprising group of computer hackers decided to strike back. Using equipment bought legally, using network connections obtained and paid for legally, and making sure that all usage was tracked and paid for, this same group began a systematic attack on foreign computers. Then, upon gaining access, they gave any and all information obtained to American corporations and the Federal government. What laws would be broken? Federal computer crime statutes specifically target so-called "Federal interest computers"(6:133) (i.e., banks, telecommunications, military, etc.). Since these attacks would involve foreign systems, those statutes would not apply. If all calls and network connections were promptly paid for, no toll-fraud or other communications-related laws would apply. International law is so muddled that the chance of being extradited by a country like France for breaking into systems in Paris from Albuquerque is slim at best, and slimmer still when you factor in that the information gained was given to the CIA and American corporations. Every hacking case involving international break-ins has been tried and convicted on the basis of other crimes. Although the media may spray headlines like "Dutch Hackers Invade Internet" or "German Hackers Raid NASA," those hackers were tried for breaking into systems within THEIR OWN COUNTRIES... not somewhere else. A hacker who uses the handle 8lgm in England got press for hacking world-wide, but got nailed hacking locally(3). Australia's 'Realm Hackers', Phoenix, Electron and Nom, hacked almost exclusively in other countries, but their use of AT&T calling cards rather than Australian Telecom lines got them charged with defrauding the Australian government(3). 
Dutch hacker RGB got huge press for hacking a US military site and creating a "dquayle" account, but got nailed while hacking a local university(3). The list goes on and on. I asked several people about the workability of my proposal. Most seemed to concur that it was highly unlikely that anyone would have to fear action by American law enforcement, or extradition to foreign soil to face charges there. The most likely form of retribution would be eradication by agents of that government. Well, I'm willing to take that chance, but only after I get further information from as many different sources as I can. I'm not looking for anyone to condone these actions, nor to finance them. I'm only interested in any possible legal action that may interfere with my freedom. We must take the offensive and attack the electronic borders of other countries as vigorously as they attack ours, if not more so. This is indeed a war, and America must not lose it. There have always been confrontations online. On the net, as in life, unpleasantness is unavoidable. On the net, however, the behavior is far more pronounced, since it elicits a much greater response from the limited online environment than it would in the real world. People behind such behavior in the real world can be dealt with or avoided, but online they cannot. In the real world, annoying people don't impersonate you in national forums. In the real world, annoying people don't walk into your room, go through your desk, and run through the town showing everyone your private papers or possessions. In the real world, people can't readily imitate your handwriting or voice and insult your friends and family by letter or telephone. In the real world, people don't rob or vandalize and leave your fingerprints behind. The Internet is not the real world. All of the above happens continually on the Internet, and there is little anyone can do to stop it. 
The perpetrators know full well how impervious they are to retribution, since the only people who could bring their activities to a complete halt are reluctant to open cases against computer criminals due to the complex nature of the crimes. The Internet still clings to the anarchy of the ARPANET that spawned it, and many people would love for the status quo to remain. However, the actions of a few miscreants will force lasting changes on the net as a whole. The wanton destruction of sites, the petty forgeries, the needless break-ins and the clumsy blackmail attempts do not go unnoticed by the authorities. I personally couldn't care less what people do on the net. I know it is fantasy land. I know it exists only in our minds and should not have any lasting effect on the real world. Unfortunately, as the net's presence grows larger and larger and the world begins to accept it as an entity in and of itself, it will become harder to convince inexperienced users that the net is not real. I have always played by certain rules, and they have worked well for me in the years I've been online. These rules can best be summed up by the following quote: "We are taught to love all our neighbors. Be courteous. Be peaceful. But if someone lays his hands on you, send them to the cemetery." The moment someone crosses the line and interferes with my well-being in any setting (even one that is arguably unreal, such as the Internet), I will do whatever is necessary to ensure that I can once again go about minding my own business unmolested. I am not alone in this feeling. There are hundreds of net-loving anarchists who don't want the extra attention and bad press brought to our little fantasy land by people who never learned how to play well as children. Even these diehard anti-authoritarians are finding themselves caught in a serious quandary: do they do nothing and suffer the attacks, or do they make the phone call to Washington and try to get the situation resolved? 
Many people cannot afford the risk of striking back electronically, as some may suggest. Others do not have the skills needed to orchestrate an all-out electronic assault against an unknown opponent, even if they pay no heed to the legal risk. Even so, should anyone attempt such retribution electronically, the assailant will merely move to a new site and begin anew. People do not like dealing with the police. No one LOVES to call up their local law enforcement office and have a nice chat. Almost everyone feels somewhat nervous dealing with these figures, knowing that they may just as well decide to turn their focus on you rather than on the people causing the problems. Even if you live your life crime-free, there is always that underlying nervousness, even in the real world. But begin an assault directed against any individual, and I guarantee he or she will overcome such feelings and make the needed phone call. It isn't the "hacking" per se that will cause anyone's downfall or bring about governmental regulation of the net, but the unchecked attitudes and gross disregard for human dignity that run rampant online. What good can come from any of this? Surely people will regain the freedom to go about their business, but what of the added governmental attention? Electronic Anti-Stalking Laws? Electronic Trespass? Electronic Forgery? False Electronic Identification? Electronic Shoplifting? Electronic Burglary? Electronic Assault? Electronic Loitering? Illegal Packet Sniffing equated with Illegal Wiretaps?(7:69) The potential for new legislation is immense. As the networks further permeate our real lives, the continual unacceptable behavior and the public outcry that follows will force the ruling bodies to draft such laws. And who will enforce these laws? And who will watch the watchmen? Oftentimes these issues are left to resolve themselves after the laws have passed. Is this the future we want? One of increased legislation and governmental regulation? 
With the development of the supposed National Information Superhighway, the tools will be in place for a new body to continually monitor traffic for suspect activity and uphold any newly passed legislation. Do not think that the ruling forces have not considered that potential. The Information Age has arrived, and most people do not recognize the serious nature of it. Computers and related technology can either be the answer to the human race's problems or the cause of its demise. Right now we rely on computers too much and have too little security to protect us if they fail. In the coming years we will see amazing technology permeate every part of our lives; some of it will be welcomed, some won't, and some will be used against us. If we don't learn to handle the power that computers give us in the next few years, we will all pay dearly for it. Remember the warning. The future is here now, and most people aren't ready to handle it. References 1. Timothy Haight, "High Tech Spies," Time Magazine, July 5, 1993, p. 24. 2. Mark Ludwig, "Beyond van Eck Phreaking," Consumertronics, 1988, p. 47. 3. 2600: The Hacker Quarterly, Summer 1992. 4. Winn Schwartau, Chaos on the Electronic Superhighway. New York, NY: Thunder's Mouth Press, 1994, pp. 264-267. 5. Phrack, Issue #46. 6. Neil Munro, "Microwave Weapon Stuns Iraqis," Defense News, April 15, 1992, p. 133. 7. Alvin and Heidi Toffler, War and Anti-War. Pittsburgh, PA: Little, Brown and Co., 1993, p. 69. 8. Hactic, Issue #16, Fall 1994. Note: References 3, 5, and 8 are underground electronic magazines published and spread entirely through the Internet and bulletin boards. There are no page numbers, no authors' names are ever given (for security reasons, due to content), and obviously no publisher. f:\12000 essays\technology & computers (295)\Hackers Manifesto.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Hackers Manifesto - Another one got caught today, it's all over the papers. 
"Teenager Arrested in Computer Crime Scandal", "Hacker Arrested after Bank Tampering"... Damn kids. They're all alike. But did you, in your three-piece psychology and 1950's technobrain, ever take a look behind the eyes of the hacker? Did you ever wonder what made him tick, what forces shaped him, what may have molded him? I am a hacker, enter my world... Mine is a world that begins with school... I'm smarter than most of the other kids, this crap they teach us bores me... Damn underachiever. They're all alike. I'm in junior high or high school. I've listened to teachers explain for the fifteenth time how to reduce a fraction. I understand it. "No, Ms. Smith, I didn't show my work. I did it in my head..." Damn kid. Probably copied it. They're all alike. I made a discovery today. I found a computer. Wait a second, this is cool. It does what I want it to. If it makes a mistake, it's because I screwed it up. Not because it doesn't like me... Or feels threatened by me... Or thinks I'm a smart ass... Or doesn't like teaching and shouldn't be here... Damn kid. All he does is play games. They're all alike. And then it happened... a door opened to a world... rushing through the phone line like heroin through an addict's veins, an electronic pulse is sent out, a refuge from the day-to-day incompetencies is sought... a board is found. "This is it... this is where I belong..." I know everyone here... even if I've never met them, never talked to them, may never hear from them again... I know you all... Damn kid. Tying up the phone line again. They're all alike... You bet your ass we're all alike... we've been spoon-fed baby food at school when we hungered for steak... the bits of meat that you did let slip through were pre-chewed and tasteless. We've been dominated by sadists, or ignored by the apathetic. The few that had something to teach found us willing pupils, but those few are like drops of water in the desert. This is our world now... 
the world of the electron and the switch, the beauty of the baud. We make use of a service already existing without paying for what could be dirt-cheap if it wasn't run by profiteering gluttons, and you call us criminals. We explore... and you call us criminals. We seek after knowledge... and you call us criminals. We exist without skin color, without nationality, without religious bias... and you call us criminals. You build atomic bombs, you wage wars, you murder, cheat, and lie to us and try to make us believe it's for our own good, yet we're the criminals. Yes, I am a criminal. My crime is that of curiosity. My crime is that of judging people by what they say and think, not what they look like. My crime is that of outsmarting you, something that you will never forgive me for. I am a hacker, and this is my manifesto. You may stop this individual, but you can't stop us all... after all, we're all alike. f:\12000 essays\technology & computers (295)\Hacking to Peaces.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Hacking to Peaces The "Information Superhighway" shares common traits with a regular highway. People travel on it daily and attempt to reach a predetermined destination. There are criminals who want to victimize other travelers in any way possible. A reckless driver who runs another off the road is like a skilled hacker. Hacking is one way to torment people on the Internet. Much of the mainstream hacking community feels it is their right to confuse others for their own entertainment. Simply stated, hacking is the intrusion into a computer for personal benefit. The motives do not have to be focused on profit, because many do it out of curiosity. Hackers seek to fill an emptiness left by an inadequate education. Do hackers have the right to explore wherever they want on the Internet (with or without permission), or does the general population have the right to be safe from their trespasses? 
To tackle this question, people have to know what a hacker is. The connotation of the word "hacker" is a person who does mischief to computer systems, through things like computer viruses and cybercrimes. "There is no single widely-used definition of computer-related crime, [so] computer network users and law enforcement officials must distinguish between illegal or deliberate network abuse versus behavior that is merely annoying. Legal systems everywhere are busily studying ways of dealing with crimes and criminals on the Internet" (Voss, 1996, p. 2). There are ultimately three different views on the hacker controversy. The first is that hacking or any intrusion into a computer is just like trespassing: any electronic medium should be treated as if it were tangible, and all laws should be followed as such. At the other extreme are the people who see hacking as a privilege that falls under the right of free speech. They want the limits of the law pushed to their farthest extent; hacking, to them, is a right that belongs to the individual. The third group falls between the two. These people feel that stealing information is a crime and that privacy is something hackers should not invade, but they are not as hard-line as those who feel hackers should be eliminated. Hackers have their own ideals about how the Internet should operate: the fewer laws there are to impede a hacker's right to say and do what they want, the better they feel. Most people who hack fit a certain profile. Most of them are disappointed with school, feeling "I'm smarter than most of the other kids, this crap they teach us bores me" (Mentor, 1986, p. 70). Computers are these hackers' only refuge, and the Internet gives them a way to express themselves. The hacker environment hinges on people's First Amendment right to freedom of speech. Some justify their hacking by saying that what they do is legitimate. 
Some hackers feel their pastime is legitimate and do it only for the information; others do it for the challenge. Still other hackers feel it is their right to correct offenses committed against people by large corporations or the government. Hackers have brought it to the public's attention that the government keeps information on people without the consent of the individual. Was it a crime for hackers to show that the government was intruding on the privacy of the public? The government hit the panic stage when reports stated that over 65% of the government's computers could be hacked into 95% of the time (Anthes, 1996, p. 21). Other hackers expose dubious business practices that large corporations try to get away with. People find this information both helpful and disturbing. However, the public may not feel that the benefits outweigh the problems that hackers can cause. When companies find intruders in their computer systems, they strengthen their security, which costs money. Reports indicate that hackers cost companies a total of $150 to $300 billion a year (Steffora & Cheek, 1994, p. 43). Security systems must be implemented to prevent losses, and the money that companies invest in security goes into the cost of the products they sell. This, in turn, raises prices, which is not popular with the public. The government feels that it should step in and make the choices when it comes to the control of cyberspace. However, the government has a tremendous amount of trouble handling the laws dealing with hacking. What most law enforcement agencies follow is the Computer Fraud and Abuse Act of 1986. "Violations of the Computer Fraud and Abuse Act include intrusions into government, financial, most medical, and Federal interest computers. Federal interest computers are defined by law as two or more computers involved in the criminal offense, which are located in different states. 
Therefore, a commercial computer which is the victim of an intrusion coming from another state is a "Federal interest" computer" (Federal, 1996, p. 1). Most of the time, the laws have to be extremely specific, and hackers find loopholes in them, ultimately getting around them. Another problem lies with the people who make the laws. Legislators need to be familiar with the high-tech tools these hackers are using, but most of them know very little about computer systems. The current legal system is unfair: it tramples on the rights of the individual and is not productive, as illustrated in the following case. David LaMacchia used his computers as "distribution centers for illegally copied software. In this case, the law was not prepared to handle whatever crimes may have been committed. The judge ruled that there was no conspiracy and dismissed the case. If statutes were in place to address the liability taken on by a BBS operator for the materials contained on the system, situations like this might be handled very differently" (Voss, 1996, p. 2). The government is not ready to handle the continually expanding reaches of the Internet. If the government cannot handle the hackers, then who should judge the limits of hacking? This decision has to be placed in the hands of the public, but in all probability hackers will never be stopped. The hacker's mentality stems from boredom and a need for adventure, and no laws or public beliefs that try to suppress it can succeed. Every institution hackers have encountered has oppressed them, and hacking is their only means of release; the government and the public cannot take that away from them. That is not necessarily a bad thing. Hacking can bring some good results, especially bringing oppressive bodies (like the government and large corporations) to their knees by releasing information that shows how suppressive they have been.
However, people who hack to annoy or to destroy are not valid in their reasoning. Nothing is accomplished by mindless destruction, and other than being a phallic display, it serves no purpose. Laws and regulations should limit these people's ability to cause havoc. Hacking is something that will continue to be debated in and out of the computer field, but maybe someday the public will accept hackers. Conversely, maybe the extreme hackers will calm down and follow accepted behavior.

References

Anthes, G. H. (1996, September 16). Few gains made against hackers. Computerworld, 30(38), 21.
Federal Bureau of Investigation. (1997, February). Federal Bureau of Investigation National Computer Crime Squad [Internet]. Available: World Wide Web, http://www.fbi.gov/programs/nccs/compcrim.htm
Mentor, The. (1986). Hacker's manifesto, or the conscience of a hacker. In Victor J. Vitanza (Ed.), CyberReader (pp. 70-71). Boston: Allyn and Bacon.
Steffora, A., & Cheek, M. (1994, February 7). Hacking goes legit. Industry Week, 243(3), 43-44, 46.
Voss, N. D. (1996, December). Crime on the Internet. Jones Telecommunications and Multimedia Encyclopedia [Internet]. Available: World Wide Web, http://www.digitalcentury.com/encyclo/update/crime.html

f:\12000 essays\technology & computers (295)\Hebrew Text and Fonts.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Hebrew Text and Fonts

Today's written language is quickly becoming history. Just as the carved tablet has become a conversation piece in the archeologist's living room, handwriting is becoming as ancient as the Dead Sea Scrolls. A new form of visual communication is taking over the entire world. Languages from across this widespread planet are now becoming more accessible to every culture. As the pen and pencil begin to disappear into the history books, keyboards and monitors are making it easier for people to communicate in fast and effective ways.
The Hebrew language has always been mysterious and bastardized, composed of ancient Greek and Egyptian symbol derivatives. The language eventually became independent, although it remains very mysterious, and is used mainly by the Israelis. Hebrew writing has now taken a new form, one the English language has had for many years. This form, called "type," is not new by any means; however, until a few years ago it was impossible to find a Hebrew typeface on any word processing unit unless it was a specialized typewriter made in Jerusalem. Hebrew type has now been transformed into a computer-compatible typeface found in two forms: script and print. The script form of Hebrew type is the equivalent of the commonly used italic form of an English typeface. The print form is a more linear and boxy form of the Hebrew lettering. Hebrew fonts and word processing software are easily downloadable by anyone with access to the Internet. These programs are not compatible with English software but work on their own to allow for the easy typing and printing of Hebrew documents. They also allow for communication in, and access to, the Hebrew language through the Internet and e-mail. Through this new step we see that the written language has taken another step forward in its evolution. Language has become more easily understood by other cultures, diminishing the distance and the miscommunication between what at times seem to be completely different worlds.

f:\12000 essays\technology & computers (295)\history of computers in America.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ History of the Computer Industry in America

Only once in a lifetime will a new invention come about to touch every aspect of our lives. Such a device that changes the way we work, live, and play is a special one, indeed. A machine that has done all this and more now exists in nearly every business in the U.S.
and one out of every two households (Hall, 156). This incredible invention is the computer. The electronic computer has been around for over a half-century, but its ancestors have been around for 2000 years. However, only in the last 40 years has it changed American society. From the first wooden abacus to the latest high-speed microprocessor, the computer has changed nearly every aspect of people's lives for the better. The earliest ancestor of the modern-day computer is the abacus, which dates back almost 2000 years. It is simply a wooden rack holding parallel wires on which beads are strung. When these beads are moved along the wires according to "programming" rules that the user must memorize, all ordinary arithmetic operations can be performed (Soma, 14). The next innovation in computers took place in 1642, when Blaise Pascal invented the first digital calculating machine. It could only add numbers, and they had to be entered by turning dials. It was designed to help Pascal's father, who was a tax collector (Soma, 32). In the early 1800s, a mathematics professor named Charles Babbage designed an automatic calculation machine. It was steam powered and could store up to 1000 50-digit numbers. Built into his machine were operations that included everything a modern general-purpose computer would need. It was programmed by, and stored data on, cards with holes punched in them, appropriately called punch cards. His inventions were failures for the most part because of the lack of precision machining techniques of the time and the lack of demand for such a device (Soma, 46). After Babbage, people began to lose interest in computers. However, between 1850 and 1900 there were great advances in mathematics and physics that began to rekindle that interest (Osborne, 45). Many of these new advances involved complex calculations and formulas that were very time consuming for human calculation. The first major use for a computer in the U.S.
was during the 1890 census. Two men, Herman Hollerith and James Powers, developed a new punched-card system that could automatically read information on cards without human intervention (Gulliver, 82). Since the population of the U.S. was increasing so fast, the computer was an essential tool in tabulating the totals. These advantages were noted by commercial industries and soon led to the development of improved punch-card business-machine systems by International Business Machines (IBM), Remington-Rand, Burroughs, and other corporations. By modern standards the punched-card machines were slow, typically processing from 50 to 250 cards per minute, with each card holding up to 80 digits. At the time, however, punched cards were an enormous step forward; they provided a means of input, output, and memory storage on a massive scale. For more than 50 years following their first use, punched-card machines did the bulk of the world's business computing and a good portion of the computing work in science (Chposky, 73). By the late 1930's punched-card machine techniques had become so well established and reliable that Howard Hathaway Aiken, in collaboration with engineers at IBM, undertook construction of a large automatic digital computer based on standard IBM electromechanical parts. Aiken's machine, called the Harvard Mark I, handled 23-digit numbers and could perform all four arithmetic operations. Also, it had special built-in programs to handle logarithms and trigonometric functions. The Mark I was controlled from prepunched paper tape. Output was by card punch and electric typewriter. It was slow, requiring 3 to 5 seconds for a multiplication, but it was fully automatic and could complete long computations without human intervention (Chposky, 103). The outbreak of World War II produced a desperate need for computing capability, especially for the military. New weapons systems were produced which needed trajectory tables and other essential data. In 1942, John P.
Eckert, John W. Mauchley, and their associates at the University of Pennsylvania decided to build a high-speed electronic computer to do the job. This machine became known as ENIAC, for "Electronic Numerical Integrator And Computer". It could multiply two numbers at the rate of 300 products per second, by finding the value of each product from a multiplication table stored in its memory. ENIAC was thus about 1,000 times faster than the previous generation of computers (Dolotta, 47). ENIAC used 18,000 standard vacuum tubes, occupied 1800 square feet of floor space, and used about 180,000 watts of electricity. It used punched-card input and output. The ENIAC was very difficult to program because one had to essentially re-wire it to perform whatever task he wanted the computer to do. It was, however, efficient in handling the particular programs for which it had been designed. ENIAC is generally accepted as the first successful high-speed electronic digital computer and was used in many applications from 1946 to 1955 (Dolotta, 50). Mathematician John von Neumann was very interested in the ENIAC. In 1945 he undertook a theoretical study of computation that demonstrated that a computer could have a very simple, fixed physical structure and yet be able to execute any kind of computation effectively by means of properly programmed control, without the need for any changes in hardware. Von Neumann came up with incredible ideas for methods of building and organizing practical, fast computers. These ideas, which came to be referred to as the stored-program technique, became fundamental for future generations of high-speed digital computers and were universally adopted (Hall, 73). The first wave of modern programmed electronic computers to take advantage of these improvements appeared in 1947. This group included computers using random access memory (RAM), which is a memory designed to give almost constant access to any particular piece of information (Hall, 75).
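The stored-program technique described above can be sketched in miniature. The following toy machine is purely illustrative (the instruction names, the single accumulator, and the memory layout are all invented for this sketch, not any historical design): instructions and data share one memory, and a fetch-decode-execute loop runs whatever program the memory happens to hold, so changing the computation means changing memory contents rather than rewiring hardware.

```python
# Toy stored-program machine: one memory holds both instructions and data.
# A fetch-decode-execute loop interprets whatever program is in memory.

def run(memory):
    """Execute the program stored in 'memory'. Instructions are
    (opcode, operand) tuples; data cells are plain integers."""
    acc = 0          # single accumulator register
    pc = 0           # program counter
    while True:
        op, arg = memory[pc]   # fetch
        pc += 1
        if op == "LOAD":       # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":      # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "STORE":    # memory[arg] <- acc
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program: add the numbers in cells 4 and 5, store the result in cell 6.
memory = [
    ("LOAD", 4),
    ("ADD", 5),
    ("STORE", 6),
    ("HALT", 0),
    30,   # cell 4: data
    12,   # cell 5: data
    0,    # cell 6: result goes here
]
print(run(memory)[6])  # -> 42
```

To run a different computation, only the contents of `memory` change; the loop itself, standing in for the hardware, stays fixed, which is the essence of the stored-program idea.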
These machines had punched-card or punched-tape input and output devices and RAMs of 1000-word capacity. Physically, they were much more compact than ENIAC: some were about the size of a grand piano and required 2500 small electron tubes. This was quite an improvement over the earlier machines. The first-generation stored-program computers required considerable maintenance, usually attained 70% to 80% reliable operation, and were used for 8 to 12 years. Typically, they were programmed directly in machine language, although by the mid-1950s progress had been made in several aspects of advanced programming. This group of machines included EDVAC and UNIVAC, the first commercially available computers (Hazewindus, 102). The UNIVAC was developed by John W. Mauchley and John Eckert, Jr. in the 1950's. Together they had formed the Mauchley-Eckert Computer Corporation, America's first computer company, in the 1940's. During the development of the UNIVAC, they began to run short on funds and sold their company to the larger Remington-Rand Corporation. Eventually they built a working UNIVAC computer. It was delivered to the U.S. Census Bureau in 1951, where it was used to help tabulate the U.S. population (Hazewindus, 124). Early in the 1950s two important engineering discoveries changed the electronic computer field. The first computers were made with vacuum tubes, but by the late 1950's computers were being made out of transistors, which were smaller, less expensive, more reliable, and more efficient (Shallis, 40). In 1959, Robert Noyce, a physicist at the Fairchild Semiconductor Corporation, invented the integrated circuit, a tiny chip of silicon that contained an entire electronic circuit. Gone was the bulky, unreliable, but fast machine; now computers began to become more compact, more reliable, and of greater capacity (Shallis, 49). These new technical discoveries rapidly found their way into new models of digital computers.
Memory storage capacities increased 800% in commercially available machines by the early 1960s and speeds increased by an equally large margin. These machines were very expensive to purchase or to rent and were especially expensive to operate because of the cost of hiring programmers to perform the complex operations the computers ran. Such computers were typically found in large computer centres--operated by industry, government, and private laboratories--staffed with many programmers and support personnel (Rogers, 77). By 1956, 76 of IBM's large computer mainframes were in use, compared with only 46 UNIVACs (Chposky, 125). In the 1960s efforts to design and develop the fastest possible computers with the greatest capacity reached a turning point with the completion of the LARC machine for Livermore Radiation Laboratories by the Sperry-Rand Corporation, and the Stretch computer by IBM. The LARC had a core memory of 98,000 words and multiplied in 10 microseconds. Stretch was provided with several ranks of memory, the ranks of greater capacity having slower access, the fastest access time being less than 1 microsecond and the total capacity in the vicinity of 100 million words (Chposky, 147). During this time the major computer manufacturers began to offer a range of computer capabilities, as well as various computer-related equipment. These included input means such as consoles and card feeders; output means such as page printers, cathode-ray-tube displays, and graphing devices; and optional magnetic-tape and magnetic-disk file storage. These found wide use in business for such applications as accounting, payroll, inventory control, ordering supplies, and billing. Central processing units (CPUs) for such purposes did not need to be very fast arithmetically and were primarily used to access large amounts of records on file.
The greatest number of computer systems were delivered for the larger applications, such as in hospitals for keeping track of patient records, medications, and treatments given. They were also used in automated library systems and in database systems such as the Chemical Abstracts system, where computer records now on file cover nearly all known chemical compounds (Rogers, 98). The trend during the 1970s was, to some extent, away from extremely powerful, centralized computational centres and toward a broader range of applications for less-costly computer systems. Most continuous-process manufacturing, such as petroleum refining and electrical-power distribution systems, began using computers of relatively modest capability for controlling and regulating their activities. In the 1960s the programming of applications problems was an obstacle to the self-sufficiency of moderate-sized on-site computer installations, but great advances in applications programming languages removed these obstacles. Applications languages became available for controlling a great range of manufacturing processes, for computer operation of machine tools, and for many other tasks (Osborne, 146). In 1971 Marcian E. Hoff, Jr., an engineer at the Intel Corporation, invented the microprocessor and another stage in the development of the computer began (Shallis, 121). A new revolution in computer hardware was now well under way, involving miniaturization of computer-logic circuitry and of component manufacture by what are called large-scale integration techniques. In the 1950s it was realized that "scaling down" the size of electronic digital computer circuits and parts would increase speed and efficiency and improve performance. However, at that time the manufacturing methods were not good enough to accomplish such a task. About 1960 photo printing of conductive circuit boards to eliminate wiring became highly developed. 
Then it became possible to build resistors and capacitors into the circuitry by photographic means (Rogers, 142). In the 1970s entire assemblies, such as adders, shifting registers, and counters, became available on tiny chips of silicon. In the 1980s very large scale integration (VLSI), in which hundreds of thousands of transistors are placed on a single chip, became increasingly common. Many companies, some new to the computer field, introduced in the 1970s programmable minicomputers supplied with software packages. The size-reduction trend continued with the introduction of personal computers, which are programmable machines small enough and inexpensive enough to be purchased and used by individuals (Rogers, 153). One of the first of such machines was introduced in January 1975. Popular Electronics magazine provided plans that would allow any electronics wizard to build his own small, programmable computer for about $380 (Rose, 32). The computer was called the Altair 8800. Its programming involved pushing buttons and flipping switches on the front of the box. It didn't include a monitor or keyboard, and its applications were very limited (Jacobs, 53). Even so, many orders came in for it, and several famous founders of computer and software manufacturing companies got their start in computing through the Altair. For example, Steve Jobs and Steve Wozniak, founders of Apple Computer, built a much cheaper, yet more productive version of the Altair and turned their hobby into a business (Fluegelman, 16). After the introduction of the Altair 8800, the personal computer industry became a fierce battleground of competition. IBM had been the computer industry standard for well over a half-century. They held their position as the standard when they introduced their first personal computer, the IBM Model 60, in 1975 (Chposky, 156).
However, the newly formed Apple Computer company was releasing its own personal computer, the Apple II (the Apple I, the first computer designed by Jobs and Wozniak in Wozniak's garage, was not produced on a wide scale). Software was needed to run the computers as well. Microsoft developed a Disk Operating System (MS-DOS) for the IBM computer while Apple developed its own software system (Rose, 37). Because Microsoft had now set the software standard for IBM machines, every software manufacturer had to make their software compatible with Microsoft's. This would lead to huge profits for Microsoft (Cringley, 163). The main goal of the computer manufacturers was to make the computer as affordable as possible while increasing speed, reliability, and capacity. Nearly every computer manufacturer accomplished this, and computers popped up everywhere. Computers were in businesses keeping track of inventories. Computers were in colleges aiding students in research. Computers were in laboratories making complex calculations at high speeds for scientists and physicists. The computer had made its mark everywhere in society and built up a huge industry (Cringley, 174). The future is promising for the computer industry and its technology. The speed of processors is expected to double every year and a half in the coming years. As manufacturing techniques are further perfected, the prices of computer systems are expected to steadily fall. However, since microprocessor technology will keep advancing, its higher cost will offset the drop in price of older processors. In other words, the price of a new computer will stay about the same from year to year, but technology will steadily increase (Zachary, 42). Since the end of World War II, the computer industry has grown from a standing start into one of the biggest and most profitable industries in the United States.
It now comprises thousands of companies, making everything from multi-million dollar high-speed supercomputers to printout paper and floppy disks. It employs millions of people and generates tens of billions of dollars in sales each year (Malone, 192). Surely, the computer has impacted every aspect of people's lives. It has affected the way people work and play. It has made everyone's life easier by doing difficult work for people. The computer truly is one of the most incredible inventions in history.

Works Cited

Chposky, James. Blue Magic. New York: Facts on File Publishing, 1988.
Cringley, Robert X. Accidental Empires. Reading, MA: Addison-Wesley Publishing, 1992.
Dolotta, T. A. Data Processing: 1940-1985. New York: John Wiley & Sons, 1985.
Fluegelman, Andrew. "A New World." MacWorld. San Jose, CA: MacWorld Publishing, February 1984 (Premiere Issue).
Gulliver, David. Silicon Valley and Beyond. Berkeley, CA: Berkeley Area Government Press, 1981.
Hall, Peter. Silicon Landscapes. Boston: Allen & Unwin, 1985.
Hazewindus, Nico. The U.S. Microelectronics Industry. New York: Pergamon Press, 1988.
Jacobs, Christopher W. "The Altair 8800." Popular Electronics. New York: Popular Electronics Publishing, January 1975.
Malone, Michael S. The Big Scare: The U.S. Computer Industry. Garden City, NY: Doubleday & Co., 1985.
Osborne, Adam. Hypergrowth. Berkeley, CA: Idthekkethan Publishing Company, 1984.
Rogers, Everett M. Silicon Valley Fever. New York: Basic Books, 1984.
Rose, Frank. West of Eden. New York: Viking Publishing, 1989.
Shallis, Michael. The Silicon Idol. New York: Schocken Books, 1984.
Soma, John T. The History of the Computer. Toronto: Lexington Books, 1976.
Zachary, William. "The Future of Computing." Byte. Boston: Byte Publishing, August 1994.
f:\12000 essays\technology & computers (295)\History of Computers.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ENG 121

The volume and use of computers in the world are so great that they have become impossible to ignore. Computers appear to us in so many ways that many times we fail to see them as they actually are. People interact with a computer when they purchase their morning coffee at a vending machine. As they drive to work, the traffic lights that so often hamper them are controlled by computers in an attempt to speed the journey. Accept it or not, the computer has invaded our lives. The origins and roots of computers started out as many other inventions and technologies have in the past: they evolved from a relatively simple idea or plan designed to help perform functions easier and quicker. The first basic type of computers were designed to do just that: compute! They performed basic math functions such as multiplication and division and displayed the results in a variety of methods. Some computers displayed results as a binary representation of electronic lamps. Binary denotes using only ones and zeros; thus, lit lamps represented ones and unlit lamps represented zeros. The irony of this is that people needed to perform another mathematical function to translate binary to decimal to make the output readable to the user. One of the first computers was called ENIAC. It was huge and monstrous, nearly the size of a standard railroad car. It contained electronic tubes, heavy-gauge wiring, angle iron, and knife switches, just to name a few of the components. It is difficult to believe that computers have evolved into the suitcase-sized microcomputers of the 1990's. Computers eventually evolved into less archaic-looking devices near the end of the 1960's. Their size had been reduced to that of a small automobile and they were processing segments of information at faster rates than older models.
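The binary lamp readout mentioned above forced the operator to translate by hand: each lamp stands for one binary digit, and the decimal value is recovered by summing powers of two. A minimal sketch of that translation (the function name and list encoding are invented for illustration):

```python
# Each lamp is 1 (lit) or 0 (unlit); the row of lamps is a binary number
# that the operator must convert to decimal to read the result.

def lamps_to_decimal(lamps):
    """lamps: list of 0/1 values, most significant lamp first."""
    value = 0
    for bit in lamps:
        value = value * 2 + bit   # shift the running value left, add the bit
    return value

# Lamps lit-unlit-lit-unlit-lit read as binary 10101, i.e. decimal 21.
print(lamps_to_decimal([1, 0, 1, 0, 1]))  # -> 21
```

The loop is exactly the mental arithmetic an early operator performed while reading the panel, which is why such displays were so inconvenient for non-specialists.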
Most computers at this time were termed "mainframes" because many computers were linked together to perform a given function. The primary users of these types of computers were military agencies and large corporations such as Bell, AT&T, General Electric, and Boeing. Organizations such as these had the funds to afford such technologies. However, operation of these computers required extensive intelligence and manpower resources. The average person could not have fathomed trying to operate and use these million-dollar processors. The United States is credited with pioneering the computer. It was not until the early 1970's that nations such as Japan and the United Kingdom started utilizing technology of their own for the development of the computer. This resulted in newer components and smaller-sized computers. The use and operation of computers had developed into a form that people of average intelligence could handle and manipulate without too much ado. When the economies of other nations started to compete with the United States, the computer industry expanded at a great rate. Prices dropped dramatically and computers became more affordable to the average household. Like the invention of the wheel, the computer is here to stay. The operation and use of computers in our present era of the 1990's has become so easy and simple that perhaps we may have taken too much for granted. Almost everything of use in society requires some form of training or education. Many people say that the predecessor to the computer was the typewriter, which definitely required training and experience in order to operate it at a usable and efficient level. Children are being taught basic computer skills in the classroom in order to prepare them for the future evolution of the computer age. The history of computers started about 2000 years ago with the birth of the abacus, a wooden rack holding two horizontal wires with beads strung on them.
When these beads are moved around according to programming rules memorized by the user, all regular arithmetic problems can be done. Another important invention from around the same time was the astrolabe, used for navigation. Blaise Pascal is usually credited with building the first digital computer in 1642. It added numbers entered with dials and was made to help his father, a tax collector. In 1671, Gottfried Wilhelm von Leibniz designed a computer that was built in 1694. It could add and, after changing some things around, multiply. Leibniz invented a special stepped-gear mechanism for introducing the addend digits, and this is still being used. The prototypes made by Pascal and Leibniz were not used in many places, and were considered weird until a little more than a century later, when Thomas of Colmar (a.k.a. Charles Xavier Thomas) created the first successful mechanical calculator that could add, subtract, multiply, and divide. A lot of improved desktop calculators by many inventors followed, so that by about 1890 the range of improvements included: accumulation of partial results, storage and automatic re-entry of past results (a memory function), and printing of the results. Each of these required manual installation. These improvements were mainly made for commercial users, and not for the needs of science. While Thomas of Colmar was developing the desktop calculator, a series of very interesting developments in computers was started in Cambridge, England, by Charles Babbage (for whom the computer store "Babbages" is named), a mathematics professor. In 1812, Babbage realized that many long calculations, especially those needed to make mathematical tables, were really a series of predictable actions that were constantly repeated. From this he suspected that it should be possible to do them automatically. He began to design an automatic mechanical calculating machine, which he called a difference engine. By 1822, he had a working model to demonstrate.
Financial help from the British Government was obtained and Babbage started fabrication of a difference engine in 1823. It was intended to be steam powered and fully automatic, including the printing of the resulting tables, and commanded by a fixed instruction program. The difference engine, although having limited adaptability and applicability, was really a great advance. Babbage continued to work on it for the next 10 years, but in 1833 he lost interest because he thought he had a better idea: the construction of what would now be called a general-purpose, fully program-controlled, automatic mechanical digital computer. Babbage called this idea an Analytical Engine. The ideas of this design showed a lot of foresight, although this couldn't be appreciated until a full century later. The plans for this engine called for a decimal computer operating on numbers of 50 decimal digits (or words) and having a storage capacity (memory) of 1,000 such numbers. The built-in operations were supposed to include everything that a modern general-purpose computer would need, even the all-important conditional control transfer capability that would allow commands to be executed in any order, not just the order in which they were programmed. As people can see, it took a great deal of intelligence and fortitude to arrive at the 1990's style and use of computers. People have assumed that computers are a natural development in society and take them for granted. Just as people have learned to drive an automobile, it also takes skill and learning to utilize a computer. Computers in society have become difficult to understand. Exactly what they consisted of and what actions they performed were highly dependent upon the type of computer. To say a person had a typical computer doesn't necessarily narrow down just what the capabilities of that computer were. Computer styles and types covered so many different functions and actions that it was difficult to name them all.
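Babbage's insight that table-making reduces to "predictable actions that were constantly repeated" is the method of finite differences: for a polynomial of degree n, the n-th differences are constant, so every new table entry can be produced by addition alone, with no multiplication. A minimal sketch of the idea (the function name and list layout are invented for illustration, not Babbage's mechanism):

```python
# Method of finite differences, the principle behind the difference engine:
# tabulate a polynomial using nothing but repeated addition.

def difference_table(initial, steps):
    """initial: [f(0), first difference, second difference, ...]
    Returns 'steps' successive values of f, computed by addition only."""
    col = list(initial)        # the engine's column of running differences
    values = []
    for _ in range(steps):
        values.append(col[0])  # current table value
        # propagate additions down the column: each cell absorbs the next
        for i in range(len(col) - 1):
            col[i] += col[i + 1]
    return values

# For f(x) = x^2: f(0) = 0, first difference f(1)-f(0) = 1,
# and the second difference is constantly 2.
print(difference_table([0, 1, 2], 6))  # -> [0, 1, 4, 9, 16, 25]
```

Because every step is an addition, the whole procedure could be carried out by gears and carry levers, which is what made a steam-powered, fully automatic table printer plausible in the 1820s.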
The purpose of the original computers of the 1940's was easy to define when they were first invented: they primarily performed mathematical functions many times faster than any person could have calculated. However, the evolution of the computer had created many styles and types that were greatly dependent on a well-defined purpose. The computers of the 1990's roughly fell into three groups consisting of mainframes, networking units, and personal computers. Mainframe computers were extremely large machines with the capability of processing and storing massive amounts of data in the form of numbers and words. Mainframes were the first types of computers developed in the 1940's. Users of these types of computers ranged from banking firms to large corporations and government agencies. They were usually very expensive but designed to last at least five to ten years. They also required well-educated and experienced manpower to be operated and maintained. Harry Wulforst, in his book Breakthrough to the Computer Age, describes the old mainframes of the 1940's compared to those of the 1990's by speculating, "...the contrast to the sound of the sputtering motor powering the first flights of the Wright Brothers at Kitty Hawk and the roar of the mighty engines on a Cape Canaveral launching pad" (126). Networking computers derived from the idea of bettering communications. They were medium-sized computers specifically designed to link and communicate with other computers. The United States government initially designed and utilized these types of computers in the 1960's in order to better the national response to nuclear threats and attacks. The Internet developed as a direct result of this communication system. In the 1990's, there were literally thousands of these communication computers scattered all over the world, serving as the communication traffic managers for the entire Internet.
One source stated it best concerning the volume of Internet computers by revealing, "... the number of hosts on the Internet began an explosive growth. By 1988 there were over 50,000 hosts. A year later, there were three times that many" (Campbell-Kelly and Aspray 297). The personal computers that are in large abundance in the 1990's are actually very simple machines. Their basic purpose is to provide a usable platform for a person to perform given tasks easier and faster. They perform word processing, spreadsheet functions, and person-to-person communications, just to name a few. They are also a great form of enjoyment, as many games have been developed to play on these types of computers. These computers are the most numerous types in the world due to their relatively small cost and size. The internal workings of personal computers primarily consisted of a central processing unit, a keyboard, a video monitor, and possibly a printer unit. The central processing unit is the heart and brains of the system. The functions of the central processing unit were based on a design called the Von Neumann computer, designed in 1952. As stated in the book The Dream Machine, the Von Neumann computer consisted of an input, memory, control, arithmetic unit, and output as the basic processes of a central processing unit. It has become the basic design and fundamental basis for the development of most computers (Palfreman and Swade 48). Works Cited Wulforst, Harry. Breakthrough to the Computer Age. New York: Charles Scribner's Sons, 1982. Palfreman, Jon, and Doron Swade. The Dream Machine. London: BBC Books, 1991. Campbell-Kelly, Martin, and William Aspray. Computer: A History of the Information Machine. New York: BasicBooks, 1996. f:\12000 essays\technology & computers (295)\History of The Internet.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ History of The Internet ------------- The Internet is a worldwide connection of thousands of computer networks. 
All of them speak the same language: TCP/IP, the standard protocol. The Internet allows people with access to these networks to share information and knowledge. Resources available on the Internet are chat groups, e-mail, newsgroups, file transfers, and the World Wide Web. The Internet has no centralized authority and it is uncensored. The Internet belongs to everyone and to no one. The Internet is structured as a hierarchy. At the top, each country has at least one public backbone network. Backbone networks are made of high-speed lines that connect to other backbones. There are thousands of service providers and networks that connect home or college users to the backbone networks. Today, there are more than fifty thousand networks in more than one hundred countries worldwide. However, it all started with one network. In the early 1960's the Cold War was escalating and the United States Government was faced with a problem: how could the country communicate after a nuclear war? The Pentagon's Advanced Research Projects Agency, ARPA, had a solution. They would create a non-centralized network that linked from city to city and base to base. The network was designed to function even when parts of it were destroyed. The network could not have a center, because a center would be a primary target for enemies. In 1969, ARPANET was created, named after its original Pentagon sponsor. There were four supercomputer stations, called nodes, on this high-speed network. ARPANET grew during the 1970's as more and more supercomputer stations were added. The users of ARPANET had changed the high-speed network into an electronic post office. Scientists and researchers used ARPANET to collaborate on projects and to trade notes. Eventually, people used ARPANET for leisure activities such as chatting. Soon after, the mailing list was developed. Mailing lists were discussion groups of people who would send their messages via e-mail to a group address, and also receive messages. 
This could be done twenty-four hours a day. Interestingly, the first group's topic was Science Fiction Lovers. As ARPANET became larger, a more sophisticated and standard protocol was needed. The protocol would have to link users from other small networks to ARPANET, the main network. The standard protocol invented in 1977 was called TCP/IP. Because of TCP/IP, connecting to ARPANET from any other network was made possible. In 1983, the military portion of ARPANET broke off and formed MILNET. The same year, TCP/IP was made a standard and it was being used by everyone. It linked all parts of the branching complex networks, which soon came to be called the Internet. In 1985, the National Science Foundation (NSF) began a program to establish Internet access centered on its six powerful supercomputer stations across the United States. They created a backbone called NSFNET to connect college campuses via regional networks to its supercomputer centers. ARPANET officially expired in 1989. Most of its networks were absorbed into NSFNET; the others became parts of smaller networks. The Defense Communications Agency shut down ARPANET because its functions had been taken over by NSFNET. Amazingly, when ARPANET was turned off in June of 1990, no one except the network staff noticed. In the early 1990's the Internet experienced explosive growth. It was estimated that the number of computers connected to the Internet was doubling every year. It was also estimated that at this rapid rate of growth, everyone would have an e-mail address by the year 2020. The main cause of this growth was the creation of the World Wide Web. The World Wide Web was created at CERN, a physics laboratory in Geneva, Switzerland. The Web's development was based on the transmission of web pages over the Internet, using the Hypertext Transfer Protocol, or HTTP. It is an interactive system for the dissemination and retrieval of information through web pages. 
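The request/response exchange that HTTP defines can be illustrated with a short Python sketch; the host name, page, and canned server reply below are hypothetical examples, and real HTTP involves many more headers than this:

```python
# Sketch of an HTTP exchange: a browser sends a plain-text request for a
# page, and the server replies with a status line, headers, and the page.
# The host, path, and reply here are made-up examples (HTTP/1.0 framing).
def build_request(host, path):
    # A minimal request: method, path, protocol version, then headers.
    return f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"

def parse_response(raw):
    # Headers and body are separated by a blank line (\r\n\r\n).
    head, _, body = raw.partition("\r\n\r\n")
    status_line = head.split("\r\n")[0]      # e.g. "HTTP/1.0 200 OK"
    status = int(status_line.split()[1])
    return status, body

request = build_request("www.example.org", "/index.html")
status, page = parse_response(
    "HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n<h1>Hello</h1>"
)
print(request.splitlines()[0])  # GET /index.html HTTP/1.0
print(status, page)             # 200 <h1>Hello</h1>
```

The same request/reply pattern underlies every page fetch the essay goes on to describe: the browser asks for a page by name, and the server sends back the text that the browser then displays.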
The pages may consist of text, pictures, sound, music, voice, animations, and video. Web pages can link to other web pages by hypertext links. When there is hypertext on a page, the user can simply click on the link and be taken to the new page. Previously, the Internet was black and white: text and files. The web added color. Web pages can provide entertainment, information, or commercial advertisement. The World Wide Web is the fastest growing Internet resource. In conclusion, the Internet has dramatically changed from its original purpose. It was formed by the United States government for the exclusive use of government officials and the military to communicate after a nuclear war. Today, the Internet is used globally for a variety of purposes. People can send their friends an electronic "hello." They can download a recipe for a new type of lasagna. They can argue about politics on-line, and even shop and bank electronically in their homes. The number of people signing on-line is still increasing, and the end is not in sight. As we approach the 21st century, we are experiencing a great transformation due to the Internet and the World Wide Web. We are breaking through the restrictions of the printed page and the boundaries of nations and cultures. ------------- Phillip Johnson f:\12000 essays\technology & computers (295)\History of UNIX.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Where did UNIX come from and why are there different versions of UNIX? The first efforts at developing a multi-user, multi-tasking operating system began in the 1960's in a development project called MULTICS. While working for Bell Telephone Laboratories in 1969 and 1970, Ken Thompson and Dennis Ritchie began to develop their own small single-user, multi-tasking operating system, and they chose the name UNIX. Their initial goal was simply to operate their DEC PDP machines more effectively. 
In 1971, UNIX became multi-user and multi-tasking, but it was still just being developed by a small group of programmers who were trying to take advantage of the machines they had at hand. (In other words, this operating system that they were developing did not run on any machine made by Bell!) In 1973, Dennis Ritchie rewrote the UNIX operating system in C (a language he had developed). And in 1975, the portability of the C programming language was used to "port" UNIX to a wide variety of hardware platforms. For legal reasons, Bell Labs was not able to market UNIX in the 1970's, though it did share the operating system with many universities - most notably UC-Berkeley. This led to some of the variations in UNIX which we see today. After the divestiture of the Bell System, its parent company, AT&T, became much more interested in marketing a commercial version of UNIX. And today we see that many companies have licensed their own versions: AT&T's System V and derivatives of System V such as SCO's Xenix and IBM's AIX; Berkeley's UNIX (called "BSD" for "Berkeley Software Distribution") and derivatives of Berkeley UNIX such as Sun Microsystems' SunOS, DEC's Ultrix, and Carnegie Mellon University's Mach (used on the NeXT). f:\12000 essays\technology & computers (295)\Hollywood and Computer Animation.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ IS 490 SPECIAL TOPICS Computer Graphics Lance Allen May 6, 1996 Table of Contents Introduction 3 How It Was 3 How It All Began 4 Times Were Changing 6 Industry's First Attempts 7 The Second Wave 10 How the Magic is Made 11 Modeling 12 Animation 13 Rendering 13 Conclusion 15 Bibliography 16 Introduction Hollywood has gone digital, and the old ways of doing things are dying. Animation and special effects created with computers have been embraced by television networks, advertisers, and movie studios alike. 
Film editors, who for decades worked by painstakingly cutting and gluing film segments together, are now sitting in front of computer screens. There, they edit entire features while adding sound that is not only stored digitally, but has also been created and manipulated with computers. Viewers are witnessing the results of all this in the form of stories and experiences that they never dreamed of before. Perhaps the most surprising aspect of all this, however, is that the entire digital effects and animation industry is still in its infancy. The future looks bright. How It Was In the beginning, computer graphics were as cumbersome and as hard to control as dinosaurs must have been in their own time. Like dinosaurs, the hardware systems, or muscles, of early computer graphics were huge and ungainly. The machines often filled entire buildings. Also like dinosaurs, the software programs, or brains, of computer graphics were hopelessly underdeveloped. Fortunately for the visual arts, the evolution of both the brains and the brawn of computer graphics did not take eons. It has, instead, taken only three decades to move from science fiction to current technological trends. With computers out of the stone age, we have moved into the leading edge of the silicon era. Imagine sitting at a computer without any visual feedback on a monitor. There would be no spreadsheets, no word processors, not even simple games like solitaire. This is what it was like in the early days of computers. The only way to interact with a computer at that time was through toggle switches, flashing lights, punchcards, and Teletype printouts. How It All Began In 1962, all this began to change. In that year, Ivan Sutherland, a Ph.D. student at MIT, created the science of computer graphics. For his dissertation, he wrote a program called Sketchpad that allowed him to draw lines of light directly on a cathode ray tube (CRT). The results were simple and primitive. 
They were a cube, a series of lines, and groups of geometric shapes. This offered an entirely new vision of how computers could be used. In 1964, Sutherland teamed up with Dr. David Evans at the University of Utah to develop the world's first academic computer graphics department. Their goal was to attract only the most gifted students from across the country by creating a unique department that combined hard science with the creative arts. They knew they were starting a brand new industry and wanted people who would be able to lead that industry out of its infancy. Out of this unique mix of science and art, a basic understanding of computer graphics began to grow. Algorithms for the creation of solid objects, and for their modeling, lighting, and shading, were developed. These are the roots on which virtually every aspect of today's computer graphics industry is based. Everything from desktop publishing to virtual reality finds its beginnings in the basic research that came out of the University of Utah in the 60's and 70's. During this time, Evans and Sutherland also founded the first computer graphics company. Aptly named Evans & Sutherland (E&S), the company was established in 1968 and rolled out its first computer graphics systems in 1969. Up until this time, the only computers available that could create pictures were custom-designed for the military and prohibitively expensive. E&S's computer system could draw wireframe images extremely rapidly, and was the first commercial "workstation" created for computer-aided design (CAD). It found its earliest customers in both the automotive and aerospace industries. Times Were Changing Throughout its early years, the University of Utah's Computer Science Department was generously supported by a series of research grants from the Department of Defense. 
The 1970's, with its anti-war and anti-military protests, brought increasing restrictions on the flow of academic grants, which had a direct impact on the Utah department's ability to carry out research. Fortunately, as the program wound down, Dr. Alexander Schure, founder and president of the New York Institute of Technology (NYIT), stepped forward with his dream of creating computer-animated feature films. To accomplish this task, Schure hired Edwin Catmull, a University of Utah Ph.D., to head the NYIT computer graphics lab, and then equipped the lab with the best computer graphics hardware available at that time. When completed, the lab boasted over $2 million worth of equipment. Many of the staff came from the University of Utah and were given free rein to develop both two- and three-dimensional computer graphics tools. Their goal was to soon produce a full-length computer-animated feature film. The effort, which began in 1973, produced dozens of research papers and hundreds of new discoveries, but in the end, it was far too early for such a complex undertaking. The computers of that time were simply too expensive and too underpowered, and the software was not nearly developed enough. In fact, the first full-length computer-generated feature film was not completed until 1995. By 1978, Schure could no longer justify funding such an expensive effort, and the lab's funding was cut back. The ironic thing is that had the Institute decided to patent many more of its researchers' discoveries than it did, it would control much of the technology in use today. Fortunately for the computer industry as a whole, however, this did not happen. Instead, research was made available to whoever could make good use of it, thus accelerating the technology's development. Industry's First Attempts As NYIT's influence started to wane, the first wave of commercial computer graphics studios began to appear. 
Film visionary George Lucas (creator of the Star Wars and Indiana Jones trilogies) hired Catmull from NYIT in 1978 to start the Lucasfilm Computer Development Division, and a group of over a half-dozen computer graphics studios around the country opened for business. While Lucas's computer division began researching how to apply digital technology to filmmaking, the other studios began creating flying logos and broadcast graphics for various corporations including TRW, Gillette, the National Football League, and television programs such as "The NBC Nightly News" and "ABC World News Tonight." Although it was a dream of these initial computer graphics companies to make movies with their computers, virtually all the early commercial computer graphics were created for television. It was, and still is, easier and far more profitable to create graphics for television commercials than for film. A typical frame of film requires many more computer calculations than a similar image created for television, while the per-second budget for film brings in perhaps only one-third as much income. The actual wake-up call to the entertainment industry was not to come until 1982, with the release of Star Trek II: The Wrath of Khan. That movie contained a monumental sixty seconds of the most exciting full-color computer graphics yet seen. Called the "Genesis Effect," the sequence starts out with a view of a dead planet hanging lifeless in space. The camera follows a missile's trail toward the planet, which is hit with the Genesis Torpedo. Flames arc outwards and race across the surface of the planet. The camera zooms in and follows the planet's transformation from molten lava to the cool blues of oceans and mountains shooting out of the ground. The final scene spirals the camera back out into space, revealing the cloud-covered, newly born planet. These sixty seconds may sound uneventful in light of current digital effects, but this remarkable scene represented many firsts. 
It required the development of several radically new computer graphics algorithms, including one for creating convincing computer fire and another for producing realistic mountains and shorelines from fractal equations. This was all created by the team at Lucasfilm's Computer Division. In addition, this sequence was the first time computer graphics were used as the center of attention, instead of merely as a prop to support other action. No one in the entertainment industry had seen anything like it, and it unleashed a flood of queries from Hollywood directors seeking to find out both how it was done and whether an entire film could be created in this fashion. Unfortunately, with the release of TRON later that same year and The Last Starfighter in 1984, the answer was still a decided no. Both of these films were touted as technological tours-de-force, which, in fact, they were. The films' graphics were extremely well executed, the best seen up to that point, but they could not save the films from weak scripts. Unfortunately, the technology was greatly oversold during the films' promotion, and so in the end it was the technology that was blamed for their failure. With the 1980s came the age of personal computers and dedicated workstations. Workstations were minicomputers cheap enough to buy for one person. Smaller was better, faster, and much, much cheaper. Advances in silicon chip technologies brought massive and very rapid increases in power to smaller computers, along with drastic price reductions. The costs of commercial graphics plunged to match, to the point where the major studios suddenly could no longer cover the mountains of debt coming due on their overpriced, centralized mainframe hardware. With their expenses mounting, and without the extra capital to upgrade to the newer, cheaper computers, virtually every independent computer graphics studio went out of business by 1987. 
All of them, that is, except PDI, which went on to become the largest commercial computer graphics house in the business and to serve as a model for the next wave of studios. The Second Wave Burned twice by TRON and The Last Starfighter, and frightened by the financial failure of virtually the entire industry, Hollywood steered clear of computer graphics for several years. Behind the scenes, however, the industry was building back and waiting for the next big break. The break materialized in the form of a watery creation for James Cameron's 1989 film, The Abyss. For this film, the group at George Lucas's Industrial Light and Magic (ILM) created the first completely computer-generated, entirely organic-looking, and thoroughly believable creature to be realistically integrated with live-action footage and characters. This was the watery pseudopod that snaked its way into the underwater research lab to get a closer look at its human inhabitants. In this stunning effect, ILM overcame two very difficult problems: producing a soft-edged, bulgy, and irregularly shaped object, and convincingly anchoring that object in a live-action sequence. Just as the 1982 Genesis sequence served as a wake-up call for early film computer graphics, this sequence for The Abyss was the announcement that computer graphics had finally come of age. A massive outpouring of computer-generated film graphics has since ensued, with studios from across the entire spectrum participating in the action. From that point on, digital technology spread so rapidly that the movies using digital effects have become too numerous to list in their entirety. However, they include the likes of Total Recall, Toys, Terminator 2: Judgment Day, The Babe, In the Line of Fire, Death Becomes Her, and of course, Jurassic Park. How the Magic is Made Creating computer graphics is essentially about three things: modeling, animation, and rendering. 
Modeling is the process by which 3-dimensional objects are built inside the computer; animation is about making those objects come to life with movement; and rendering is about giving them their ultimate appearance and look. Hardware is the brains and brawn of computer graphics, but it is powerless without the right software. It is the software that allows the modeler to build a computer graphic object, that helps the animator bring this object to life, and that, in the end, gives the image its final look. Sophisticated computer graphics software for commercial studios is either purchased for $30,000 to $50,000, or developed in-house by computer programmers. Most studios use a combination of both, developing new software to meet new project needs. Modeling Modeling is the first step in creating any 3D computer graphics. Modeling in computer graphics is a little like sculpting, a little like building models with wood, plastic, and glue, and a lot like CAD. Its flexibility and potential are unmatched in any other art form. With computer graphics it is possible to build entire worlds and entire realities. Each can have its own laws, its own looks, and its own scale of time and space. Access to these 3-dimensional computer realities is almost always through the 2-dimensional window of a computer monitor. This can lead to the misunderstanding that 3-D modeling is merely the production of perspective drawings. This is very far from the truth. All elements created during any modeling session possess three full dimensions and at any time can be rotated, turned upside down, and viewed from any angle or perspective. In addition, they may be re-scaled, reshaped, or resized whenever the modeler chooses. Modeling is thus the first step in creating any 3-dimensional computer animation. It requires the artist's ability to visualize mentally the objects being built, and the craftsperson's painstaking attention to detail to bring them to completion. 
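The rotate, re-scale, and view-from-any-angle operations described above come down to simple coordinate arithmetic on the model's points. A minimal Python sketch (the cube corner and rotation angle here are illustrative, not from the text):

```python
import math

# Sketch of basic modeling transforms: a model is just a list of 3-D
# points, and every new viewing angle or size is a transformation of
# those points. The example point and angle are illustrative.
def rotate_y(point, angle):
    # Rotate a point about the vertical (y) axis by `angle` radians.
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def scale(point, factor):
    # Re-scale a point uniformly about the origin.
    return tuple(factor * v for v in point)

cube_corner = (1.0, 1.0, 1.0)
turned = rotate_y(cube_corner, math.pi / 2)   # view the model from the side
bigger = scale(cube_corner, 2.0)              # re-size it at any time
print(turned)   # approximately (1.0, 1.0, -1.0)
print(bigger)   # (2.0, 2.0, 2.0)
```

Applying the same function to every point of a model turns or re-sizes the whole object, which is why a modeler can view an element from any perspective at any time.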
To create an object, a modeler starts with a blank screen and sets the scale of the computer's coordinate system for that element. The scale can be anything from microns to light-years across in size. It is important that the scale stays consistent across all elements in a project: a chair built in inches will be lost in a living room built in miles. The model is then created by building up layers of lines and patches that define the shape of the object. Animation While it is the modeler who holds the power of creation, it is the animator who provides the illusion of life. The animator uses the tools at his disposal to make objects move. Every animation process begins essentially the same way, with a storyboard. A storyboard is a series of still images that shows how the elements will move and interact with each other. This process is essential so that the animator knows what movements need to be assigned to objects in the animation. Using the storyboard, the animator sets up key points of movement for each object in the scene. The computer then produces motion for each object on a frame-by-frame basis. The final result, when assembled, gives the appearance of fluid movement. Rendering The modeler gives form, the animator provides motion, but still the animation process is not complete. The objects and elements are nothing but empty or hollow forms without any surface. They are merely outlines until the rendering process is applied. Rendering is the most computationally demanding aspect of the entire animation process. During the rendering process, the computer does virtually all the work, using software that has been purchased or written in-house. It is here that the animation achieves its final look. Objects are given surfaces that make them look like solid forms. Any type of look can be achieved by varying the properties of those surfaces; the objects finally look concrete. Next, the objects are lighted. 
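One of the simplest mathematical models a renderer can use for this lighting step is Lambertian diffuse shading; it is used here purely as an illustration, since the essay does not name a particular model. In a few lines of Python:

```python
import math

# Sketch of a simple lighting model (Lambertian diffuse shading, used
# here as an illustration): the brightness of a surface point depends on
# the angle between the surface normal and the direction to the light.
def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def diffuse_brightness(normal, to_light):
    # Dot product of unit vectors = cosine of the angle between them.
    n, l = normalize(normal), normalize(to_light)
    cos_angle = sum(a * b for a, b in zip(n, l))
    return max(0.0, cos_angle)   # surfaces facing away from the light stay dark

# A surface facing straight up, lit from directly overhead vs. from the side.
print(diffuse_brightness((0, 0, 1), (0, 0, 1)))  # 1.0 (fully lit)
print(diffuse_brightness((0, 0, 1), (1, 0, 0)))  # 0.0 (grazing, dark)
```

Production renderers combine many such calculations per pixel, for every light and surface in the scene, which is one reason rendering dominates the computation time.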
The look of the lighting is affected by the surfaces of the objects, the types of lights, and the mathematical models used to calculate the behavior of light. Once the lighting is completed, it is time to create what the camera will see. The computer calculates what the camera can see from the shapes and positions of the objects in the scene. Keep in mind that all the objects have tops, sides, bottoms, and possibly insides. The type of camera lens, fog, smoke, and other effects all have to be calculated. To create the final 2-D image, the computer scans the resulting 3-D world and pulls out the pixels that the camera can see. The image is then sent to the monitor, to videotape, or to a film recorder for display. The multiple 2-D still frames, when all assembled, produce the final animation. Conclusion Much has happened in the commercial computer graphics industry since the decline of the first wave of studios and the rise of the second. Software and hardware costs have plummeted. The number of well-trained animators and programmers has increased dramatically. And at last, Hollywood and the advertising community have acknowledged that the digital age has finally arrived, this time not to disappear. All these factors have led to an explosion in both the size of existing studios and the number of new enterprises opening their doors. As the digital tide continues to rise, only one thing is certain: we have just begun to see how computer technology will change the visual arts. BIBLIOGRAPHY How Did They Do It? Computer Illusion in Film & TV, Alpha Books, 1994; Christopher W. 
Baker Computer Graphics World, Volume 19, Number 3; March 1996; Evan Hirsch, "Beyond Reality" Computer Graphics World, Volume 19, Number 4; April 1996; Evan Marc Hirsch, "A Changing Landscape" Windows NT Magazine, Issue #7, March 1996; Joel Sloss, "There's No Business Like Show Business" Cinescape, Volume 1, Number 5; February 1995; Beth Laski, "Ocean of Dreams" f:\12000 essays\technology & computers (295)\How Magnets Affect Computer Disks.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ How Magnets Affect Computer Disks Background One of the most commonly used computer data storage mediums is the computer disk, or floppy. These are used in everyday life, either in our workplace or at home. These disks have many purposes, such as: Storing data: Floppies can be used to store software/data for short periods of time. Transferring data: Floppies are used to transfer/copy data from one computer to another. Hiding data: Floppies are also sometimes used to hide sensitive or confidential data; because of the disk's small size, it can be hidden very easily. Advertising: Because floppies are cheap to buy, they are used to advertise different types of software, such as software for the Internet advertised on America Online floppies. Floppies are also considered to be very sensitive data storage mediums. These disks have numerous advantages and disadvantages. Even though floppies are used so commonly, they are also not very dependable. There are numerous conditions under which they should normally be kept. For example: the actual magnetic disk inside the hard cover of the disk must NEVER be touched, the magnetic disk inside must be protected by the metallic sliding shield, the disk must always be kept within the temperature range of 50° to 140° Fahrenheit, and the disk must never be brought near a magnet! (3M Diskettes) There are many such hazards to computer disks. Problems caused by magnets are very common. 
A floppy can be damaged unknowingly if it is kept near a magnet, which may be in the open or inside a device, such as the speaker in computer speakers, a stereo, or a telephone. And because of the common use of magnets in everyday life, more and more floppies are damaged every day. Even though protective coverings against magnets and other electrical hazards are available for floppies, they are not used very commonly. Therefore, floppies are not a very safe medium for storage, even though they are convenient. Some of the most commonly used diskettes are made by 3M, Sony, and other such companies. The floppies are sold in boxes with instructions on them not to bring floppies near magnets, along with other DOs and DONTs. These instructions must always be followed. Floppies have different capacities, such as 720 KB (kilobytes) and 1.44 MB (megabytes). Floppies also come in different sizes, 3.5" and 5.25". The most commonly used floppy is usually the 3.5". It is not soft and cannot be bent, whereas a 5.25" disk is soft and can be bent! A floppy is a round, flat piece of Mylar coated with ferric oxide, a rustlike substance containing tiny particles capable of holding a magnetic field, and encased in a protective plastic cover, the disk jacket. Data is stored on a floppy disk by the disk drive's read/write head, which alters the magnetic orientation of the particles. Orientation in one direction represents binary 1; orientation in the other, binary 0. Purpose The purpose of my experiment was to test floppies to see how delicate they are near magnets, and how much damage can be done to the disks and to the software on them by a single magnet. I also hope my project will help others to be aware that computer disks are very delicate and sensitive to temperature, weather, magnets, etc. Hypothesis When the magnets are brought near the disk, the disk should be damaged internally, along with the software on it. 
And the weakest magnet should cause the least damage, and the strongest magnet the most damage. Experimentation Material: Four 3.5" Floppy Diskettes Four different Magnets One Personal Home Computer Printer Software: Windows95 Norton Disk Doctor Dos (Ver 4.00.950) Procedure: Every Floppy Diskette has 2874 sectors. This was calculated by dividing the total number of bytes on a disk by the number of bytes every sector occupies. There is a total of 1,457,664 bytes on every Floppy, and every sector occupies 512 bytes; dividing 1,457,664 by 512 gives the total number of sectors on every Floppy. First, I obtained the four 3.5" IBM formatted floppy diskettes (Highland). Next, I obtained the four magnets of different strengths and sizes, and tested and verified their strengths by bringing iron filings near each of them, observing how many iron filings each attracted, and then noting which magnet was the strongest and which was the weakest, in order. Then I tested each of the disks for existing errors by using a program called Norton Disk Doctor (NDD), which has the ability to detect and fix errors on a disk. There were no errors on any of the four disks. Next, I held each magnet near its disk for about 30 seconds, at about the same place on the disk. I did so on all four of the disks. Then, I brought the disks home and tested all four of the disks in a disk testing and repair program called Norton Disk Doctor. I noticed that each one of the disks had suffered damage. Each of the four disks was numbered: the Floppy with the weakest magnet was "Disk 1" and the Floppy with the strongest magnet was "Disk 4," respectively. This was done to avoid possible confusion among the disks. Result 
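As a quick check on the figures reported below, the bytes lost to bad sectors follow directly from the 512-byte sector size. This short Python sketch uses the per-disk bad-sector counts reported in this essay:

```python
# Check of the sector/byte arithmetic in these results: each sector holds
# 512 bytes, so bytes in bad sectors = bad sectors x 512. The per-disk
# bad-sector counts are the ones reported in this essay.
BYTES_PER_SECTOR = 512

bad_sectors = {"Disk 1": 7, "Disk 2": 11, "Disk 3": 30, "Disk 4": 39}

for disk, bad in sorted(bad_sectors.items()):
    print(disk, "->", bad * BYTES_PER_SECTOR, "bytes in bad sectors")
# Disk 1 -> 3584, Disk 2 -> 5632, Disk 3 -> 15360, Disk 4 -> 19968
```

The computed byte counts agree with the "Total Bytes in Bad Sectors" figures listed for each disk.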
After every floppy had been tested, I noted all the results. The results were as follows:

Disk 1: Total bytes on disk: 1,457,664; total bytes in bad sectors: 3,584; total sectors: 2,874; bad sectors: 7; good sectors: 2,867
Disk 2: Total bytes on disk: 1,457,664; total bytes in bad sectors: 5,632; total sectors: 2,874; bad sectors: 11; good sectors: 2,863
Disk 3: Total bytes on disk: 1,457,664; total bytes in bad sectors: 15,360; total sectors: 2,874; bad sectors: 30; good sectors: 2,844
Disk 4: Total bytes on disk: 1,457,664; total bytes in bad sectors: 19,968; total sectors: 2,874; bad sectors: 39; good sectors: 2,833

After the testing, I discovered that even the smallest of the magnets could cause bad sectors and damage both the disk and the data on it. Even though the damage wasn't very large, it was large enough to corrupt any program on the disk, because every part of a file is necessary for its correct use, and any bad sectors would all but destroy the file and make it worthless.

Conclusion:
In conclusion, this experiment showed that floppies are very sensitive to magnets and should never be brought near them. When the magnets were brought near the floppies, the disks were damaged; the weakest magnet caused the least damage and the strongest magnet caused the most damage.

f:\12000 essays\technology & computers (295)\How the Internet Affects Us.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
How Technology Affects Modern America

U.S. Wage Trends

The microeconomic picture of the U.S.
has changed immensely since 1973, and the trends are proving to be consistently downward for the nation's high school graduates and high school drop-outs. "Of all the reasons given for the wage squeeze - international competition, technology, deregulation, the decline of unions and defense cuts - technology is probably the most critical. It has favored the educated and the skilled," says M. B. Zuckerman, editor-in-chief of U.S. News & World Report (7/31/95). Since 1973, wages adjusted for inflation have declined by about a quarter for high school dropouts, by a sixth for high school graduates, and by about 7% for those with some college education. Only the wages of college graduates are up. Of the fastest growing technical jobs, software engineering tops the list. Carnegie Mellon University reports, "recruitment of its software engineering students is up this year by over 20%." All engineering jobs are paying well, proving that highly skilled labor is what employers want! "There is clear evidence that the supply of workers in the [unskilled labor] categories already exceeds the demand for their services," says L. Mishel, Research Director of the Welfare Reform Network. In view of these facts, I wonder if these trends are good or bad for society. "The danger of the information age is that while in the short run it may be cheaper to replace workers with technology, in the long run it is potentially self-destructive because there will not be enough purchasing power to grow the economy," says M. B. Zuckerman. My feeling is that the trend from unskilled labor to highly technical, skilled labor is a good one! But political action must be taken to ensure that this societal evolution is beneficial to all of us. "Back in 1970, a high school diploma could still be a ticket to the middle income bracket, a nice car in the driveway and a house in the suburbs.
Today all it gets is a clunker parked on the street, and a dingy apartment in a low-rent building," says Time Magazine (Jan 30, 1995 issue). However, in 1970, our government provided our children with a free education, allowing the vast majority of our population to earn a high school diploma. This meant that anyone, regardless of family income, could be educated to a level that would allow them a comfortable place in the middle class. Even restrictions upon child labor hours kept children in school, since they were not allowed to work full time while under the age of 18. This government policy was conducive to our economic markets, and allowed our country to prosper from 1950 through 1970. Now, our own prosperity has moved us into a highly technical world that requires highly skilled labor. The natural answer to this problem is that the U.S. Government's education policy must keep pace with the demands of the highly technical job market. If a middle class income of 1970 required a high school diploma, and the middle class income of 1990 requires a college diploma, then it should be as easy for the children of the 90's to get a college diploma as it was for the children of the 70's to get a high school diploma. This brings me to the issue of our country's political process in a technologically advanced world.

Voting & Poisoned Political Process in the U.S.

The advance of mass communication is natural in a technologically advanced society. In our country's short history, we have seen the development of the printing press, the radio, the television, and now the Internet, all of them able to reach millions of people. Equally natural is the poisoning and corruption of these media to benefit a few. From the 1950's until today, television has been the preferred medium.
Because it captures the minds of most Americans, it is the preferred method of persuasion for political figures, multinational corporate advertisers, and the upper 2% of the elite, who have an interest in controlling public opinion. Newspapers and radio experienced this same history, but are now somewhat obsolete in the science of changing public opinion. Though I do not expect television to become completely obsolete within the next 20 years, I do see the Internet being used by the same political figures, multinational corporations, and upper 2% elite, for the same purposes. At this time, in the Internet's young history, it is largely unregulated, and can be accessed and changed by any person with a computer and a modem; no license required, and no need for millions of dollars of equipment. But in reviewing our history, we find that newspaper, radio and television were once unregulated too. It is easy to see why government has such an interest in regulating the Internet these days. Though public opinion supports regulating sexual material on the Internet, that is just the first step toward total regulation, as experienced by every other popular mass medium in our history. This is why it is imperative to educate people about the Internet, and make it known that any regulation of it is destructive to us, not constructive! I have been a daily user of the Internet for 5 years (and a daily user of BBS communications for 9 years), which makes me a senior among us. I have seen the moves to regulate this type of communication, and have always openly opposed them. My feelings about technology, the Internet, and the political process are simple. In light of the history of mass communication, there is nothing we can do to protect any medium from the "sound bite" or any other form of commercial poisoning. But our country's public opinion doesn't have to fall into a nose-dive of lies and corruption because of it!
The first experience I had in a course on Critical Thinking came when I entered college. As many good things as I have learned in college, I found this course to be the most valuable to my basic education. I was angry that I hadn't had access to the power of critical thought over my twelve years of basic education. Simple forms of critical thinking can be taught as early as kindergarten. It isn't hard to teach a young person to understand the patterns of persuasion and to defend themselves against them. Television doesn't have to be a weapon against us, used to sway our opinions to conform to people who care about their own prosperity, not ours. With the power of a critical thinking education, we can stop being motivated by the sound bite and instead laugh at it as a cheap attempt to persuade us. In conclusion, I feel that the advance of technology is a good trend for our society; however, it must come in conjunction with advances in education, so that society is able to master and understand technology. We can be the masters of technology, and not let it be the master of us.

Bibliography

"Where Have the Good Jobs Gone?" by Mortimer B. Zuckerman, U.S. News & World Report, vol. 119, p. 68 (July 31, 1995)
"Wealth: Static Wages, Except for the Rich" by John Rothchild, Time, vol. 145, p. 60 (January 30, 1995)
"Welfare Reform" by Lawrence Mishel, http://epn.org/epi/epwelf.html (Feb 22, 1994)
"20 Hot Job Tracks" by K. T. Beddingfield, R. M. Bennefield, J. Chetwynd, T. M. Ito, K. Pollack & A. R. Wright, U.S. News & World Report, vol. 119, p. 98 (Oct 30, 1995)

f:\12000 essays\technology & computers (295)\HOW THE INTERNET GOT STARTED.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
HOW THE INTERNET GOT STARTED

Some thirty years ago, the RAND Corporation, America's foremost Cold War think tank, faced a strange strategic problem: how could the US authorities successfully communicate after a nuclear war?
Postnuclear America would need a command-and-control network, linked from city to city, state to state, base to base. But no matter how thoroughly that network was armored or protected, its switches and wiring would always be vulnerable to the impact of atomic bombs. A nuclear attack would reduce any conceivable network to tatters. And how would the network itself be commanded and controlled? Any central authority, any network citadel, would be an obvious and immediate target for an enemy missile. The center of the network would be the very first place to go. RAND mulled over this grim puzzle in deep military secrecy, and arrived at a daring solution in 1964. The principles were simple. The network itself would be assumed to be unreliable at all times. It would be designed from the get-go to transcend its own unreliability. All the nodes in the network would be equal in status to all other nodes, each node with its own authority to originate, pass, and receive messages. The messages would be divided into packets, each packet separately addressed. Each packet would begin at some specified source node and end at some other specified destination node. Each packet would wind its way through the network on an individual basis. In fall 1969, the first such node was installed at UCLA. By December 1969, there were four nodes on the infant network, which was named ARPANET, after its Pentagon sponsor. The four computers could even be programmed remotely from the other nodes. Thanks to ARPANET, scientists and researchers could share one another's computer facilities by long distance. This was a very handy service, for computer time was precious in the early '70s. In 1971 there were fifteen nodes in ARPANET; by 1972, thirty-seven nodes. And it was good.
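The packet principle RAND described can be sketched in a few lines of code. This is only an illustrative model, not ARPANET's actual software; the message text, node name, and packet size below are invented for the example. Each packet is separately addressed and numbered, may arrive in any order, and the destination reassembles the message on its own:

```python
import random

def send_message(text, dest, packet_size=4):
    # Divide the message into packets, each separately addressed and
    # carrying a sequence number so the destination can reorder them.
    packets = [
        {"dest": dest, "seq": i, "data": text[i:i + packet_size]}
        for i in range(0, len(text), packet_size)
    ]
    # Each packet winds its own way through the network, so packets
    # may arrive out of order; shuffling simulates that.
    random.shuffle(packets)
    return packets

def receive_message(packets):
    # The destination node sorts packets by sequence number and
    # reassembles the original message.
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = send_message("GREETINGS FROM RAND", dest="UCLA")
print(receive_message(packets))
```

The design point is exactly the one the essay makes: no packet depends on any central authority, so losing any single node (or any single route) cannot destroy the message.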
As early as 1977, TCP/IP was being used by other networks to link to ARPANET. ARPANET itself remained fairly tightly controlled, at least until 1983, when its military segment broke off and became MILNET. As TCP/IP became more common, entire other networks fell into the digital embrace of the Internet and messily adhered. Since the software called TCP/IP was public domain, and the basic technology was decentralized and rather anarchic by its very nature, it was difficult to stop people from barging in and linking up somewhere or other. Nobody wanted to stop them from joining this branching complex of networks, which came to be known as the "Internet." Connecting to the Internet cost the taxpayer little or nothing, since each node was independent and had to handle its own financing and its own technical requirements. The more, the merrier. Like the phone network, the computer network became steadily more valuable as it embraced larger and larger territories of people and resources. A fax machine is only valuable if everybody else has a fax machine. Until they do, a fax is just a curiosity. ARPANET, too, was a curiosity for a while. Then computer networking became an utter necessity. In 1984 the National Science Foundation got into the act, through its Office of Advanced Scientific Computing. The new NSFNET set a blistering pace for technical advancement, linking newer, faster, shinier supercomputers through thicker, faster links, upgraded and expanded again and again, in 1986, 1988, and 1990. And other government agencies leapt in: NASA, the National Institutes of Health, and the Department of Energy, each of them maintaining a digital satrapy in the Internet confederation. The nodes in this growing network-of-networks were divided up into basic varieties. Foreign computers, and a few American ones, chose to be denoted by their geographical locations.
The others were grouped by the six basic Internet domains: gov (government), mil (military), edu (education), and so on; these were, of course, the pioneers. Just think: in 1997 the standards for computer networking are global. In December 1969, there were only four nodes in the ARPANET network. Today there are tens of thousands of nodes in the Internet, scattered over forty-two countries, with more coming on line every single day. By one estimate, as of December 1996 over 50 million people use this network. Probably the most important scientific instrument of the late twentieth century is the Internet. It is spreading faster than cellular phones, faster than fax machines. The Internet offers simple freedom. There are no censors, no bosses, etc. There are only technical rules, not social or political ones. It is a bargain: you can talk to anyone, anywhere, and it doesn't charge for long distance service. It belongs to everyone and no one. The most widely used part of the "Net" is the World Wide Web. Internet mail is E-mail, a lot faster than US Postal Service mail; Internet regulars call the US mail "snailmail." File transfers allow Internet users to access remote machines and retrieve programs or text. Many Internet computers allow any person to access them anonymously and simply copy their public files, free of charge. Entire books can be transferred through direct access in a matter of minutes. Finding a link to the Internet will become easier and cheaper. At the turn of the century, network literacy will be forcing itself into every individual's life.

f:\12000 essays\technology & computers (295)\How to buy a computer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Knowledge is Power

Buying a personal computer can be as difficult as buying a car. No matter how much one investigates, how many dealers a person visits, and how much bargaining a person has done on the price, he still may not be really certain that he has gotten a good deal. There are good reasons for this uncertainty.
Computers change at a much faster rate than any other kind of product. A two-year-old car will always get a person where he wants to go, but a two-year-old computer may be completely inadequate for his needs. Also, the average person is not technically savvy enough to make an informed decision on the best processor to buy, the right size for a hard drive, or how much memory he or she really needs. Just because buying a computer can be confusing does not mean one should throw up his hands and put himself at the mercy of some salesman who may not know much more than he does. If one follows a few basic guidelines, he can be assured of making a wise purchase decision. A computer has only one purpose: to run programs. Some programs require more computing power than others. In order to figure out how powerful a computer the consumer needs, therefore, a person must first determine which programs he wants to run. For many buyers, this creates a problem. They cannot buy a computer until they know what they want to do with it, but they cannot really know all of the uses there are for a computer until they own one. This problem is not as tough as it seems, however. The consumer should go to his local computer store and look at the software that's available. Most programs list their minimum hardware requirements right on the box. After looking at a few packages, it should be pretty clear to the consumer that any mid-range system will run 99% of the available software. A person should only need a top-of-the-line system for professional applications such as graphic design, video production, or engineering. Software tends to lag behind hardware, because it's written to reach the widest possible audience. A program that only works on the fastest Pentium Pro system has very limited sales potential, so most programs written in 1995 work just fine on a fast '486 or an entry-level Pentium system.
More importantly, very few programs are optimized to take advantage of a Pentium's power. That means that even if the consumer pays a large premium for the fastest possible system, he may not see a corresponding increase in performance. Buying the latest computer system is like buying a fancy new car. One pays a high premium just to get the newest model. When the consumer drives the car out of the showroom, it becomes a used car, and its value goes down several thousand dollars. Similarly, when a new computer model comes out in a few weeks, his "latest and greatest" becomes a has-been, and its value plummets. Some people think that if they only buy the most powerful computer available, they will not have to upgrade for a long time. These people forget, however, that a generation of computer technology lasts less than a year. By computer standards, a two-year-old model is really old, and a three-year-old model is practically worthless. Sinking a lot of money into today's top-of-the-line computer makes one less willing (and less financially able) to upgrade a couple of years from now, when a person may really need it. Here's something else to consider. While a faster processor will usually increase the speed of a system, merely doubling the processor speed usually will not double the performance. A 133MHz Pentium system may only be 50% faster than a 75MHz Pentium system, for example. That's because there are a lot of other limiting factors. Memory is a prime example. One may be better off buying a 75MHz Pentium system with 16MB of RAM than a 133MHz system with 8MB. Even if buying the top machine did double its performance, however, it still might not make as big a difference as a person might think. If his software performs any given task in under a second, doubling its speed saves the consumer less than half a second. No products change as quickly as computers.
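The point about limiting factors can be put in rough numbers. The sketch below is only an illustration; the 60% CPU-bound fraction is an invented assumption, not a measured figure. Only the portion of a program's running time that is actually spent in the processor speeds up when the clock gets faster; the rest (memory, disk) runs at the old speed:

```python
def effective_speedup(cpu_fraction, cpu_speedup):
    # Amdahl's-law-style estimate: the CPU-bound fraction of the
    # workload runs cpu_speedup times faster; everything else is
    # unchanged. Returns the overall speedup of the whole task.
    return 1 / ((1 - cpu_fraction) + cpu_fraction / cpu_speedup)

# Suppose 60% of the time is CPU-bound and the clock goes from
# 75 MHz to 133 MHz (a 1.77x faster processor):
print(round(effective_speedup(0.6, 133 / 75), 2))  # -> 1.35
```

So a processor that is 77% faster yields only about a 35% faster system under these assumptions, which matches the essay's 133MHz-versus-75MHz observation.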
Considering the pace of this change, it does not make sense to buy a computer today without planning for tomorrow. Every computer claims to be upgradeable, but there are varying degrees of expandability. A truly expandable unit has:
- At least two empty SIMM sockets for memory upgrades
- At least three empty expansion slots (preferably local-bus PCI slots)
- A standard-sized motherboard that one can replace with a newer model
- A large case with lots of room inside (I prefer the "mini-tower" design.)
The last two items require a bit of explanation. The motherboard is the computer's main circuit board, which holds the processor (such as a '486 or Pentium chip) and the memory (RAM). Even if the consumer buys the fastest Pentium Pro system available today, at some point he is going to need to go to a faster processor. Some motherboards try to provide a way to add a faster processor later. The problem is, computer manufacturers do not really know what features computers will have two years from now. The best way to guarantee that he will be able to upgrade his processor, therefore, is to make sure that the consumer can replace the motherboard. A person might think that it would be very expensive to replace the motherboard, but actually, it can be a very cost-effective way to upgrade a computer. For example, a friend of mine had an old 25MHz 386SX computer with 2MB of RAM. By current standards, this computer was almost too slow to use. I replaced the '386 motherboard with one containing a 100MHz '486 DX/4 processor for about $200, including installation. The resulting computer is fast enough to run any of today's software, and the price was a lot less than Intel charges for its "Overdrive" chips, which add a fast new processor to a current (slow) motherboard. The reason I was able to perform the upgrade so inexpensively is that the original computer had an industry-standard sized motherboard in a roomy mini-tower case.
I just slid out the old motherboard and popped in the new one, using the same graphics card, sound card, hard drive, floppy drive, and memory modules as the original machine. The result was a unit identical to the previous one, only ten times as fast. Unfortunately, upgrading is not always so easy. Many systems from "big-name" manufacturers such as Compaq, IBM, and Packard Bell use proprietary motherboards and slim-line cases. The small size of these units makes them fit easily on a desktop, but does not leave much room inside for expansion. These factors make the compact desktop units a nightmare to service and to upgrade. What is a buyer to do? He should make sure that the computer he buys has a full-sized case. Such a computer should be made up of individual components, each of which can be upgraded or replaced individually, and none of which costs more than about $200. This makes the unit easy to upgrade, and easy to service should something break later on. How does one make sure that the computer uses industry-standard parts, instead of some weird proprietary technology? One quick way is to look at the expansion slots on the back. If the computer is a desktop unit (one that is wider than it is tall), the slots should go up and down, perpendicular to the desk. If it's in a tower configuration (taller than it is wide), the slots should go left to right, parallel to the desk. The number of slots should be another tip-off: the right kind of case will have space for at least seven slots. Also, the consumer should look to see where the peripherals plug in. If there is a separate video card, for example, the monitor plug will be located on the rear bracket of an expansion slot. The more individual components a computer has, the easier it is to upgrade and replace them. Computer technology changes so quickly that it does not make sense to pay a high premium for the fastest system on the market. Today's speed demon is tomorrow's has-been.
If one is looking to get the best value for his money, look to the middle of the pack. Today, for example, Pentium systems go from the 75MHz systems on the low end to 133MHz systems on the high end. The middle systems, the 100MHz and 120MHz systems, are where he will find his best buys. This situation will no doubt change as 150MHz and 166MHz systems are introduced and the 100MHz systems become the new low end. What will not change is the fact that he will get the best buy with a system that falls somewhere in the middle. Mid-priced computers cost only a little more than the "el cheapo" systems, but perform almost as well as top-of-the-line models. They will not become obsolete as fast as the cheapest computers will, but they'll still leave the consumer with enough money that he feels comfortable upgrading in a couple of years.

f:\12000 essays\technology & computers (295)\How to maintain a computer system.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
How to Maintain A Computer System

First, start a notebook that includes information on your system. This notebook should be a single source of information about your entire system, both hardware and software. Each time you make a change to your system, adding or removing hardware or software, record the change. Always include the serial numbers of all equipment, vendor support numbers, and printouts of key system files. Second, periodically review disk directories and delete unneeded files. Files have a way of building up and can quickly use up your disk space. If you think you may need a file in the future, back it up to a disk. At a minimum, you should have a system disk with your command.com, autoexec.bat, and config.sys files. If your system crashes, these files will help you get it going again. In addition, back up any files with an extension of .sys. For Windows systems, all files with extensions of .ini and .grp should also be backed up.
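The backup step above can be automated rather than done by hand. The sketch below is an illustration, not part of the original advice; the directory names are placeholders you would replace with your own, and files with the same name in different subdirectories would overwrite each other in the backup folder:

```python
import shutil
from pathlib import Path

# Extensions worth backing up, per the advice above.
BACKUP_EXTENSIONS = {".sys", ".ini", ".grp"}

def backup_config_files(system_dir, backup_dir):
    """Copy every .sys/.ini/.grp file under system_dir into backup_dir."""
    system_dir, backup_dir = Path(system_dir), Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in system_dir.rglob("*"):
        if f.is_file() and f.suffix.lower() in BACKUP_EXTENSIONS:
            # copy2 preserves timestamps; same-named files from
            # different subdirectories overwrite each other here.
            shutil.copy2(f, backup_dir / f.name)
            copied.append(f.name)
    return copied
```

Run periodically (for example, before defragmenting or opening the case), this gives you exactly the safety copies the notebook method calls for.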
Next, any time you work inside your computer, turn off the power and disconnect the equipment from the power source. Before you touch anything inside the computer, touch an unpainted metal surface such as the power supply. This will help discharge any static electricity that could damage internal components. You should also periodically defragment your hard disk. Defragmenting your hard disk reorganizes files so they are in contiguous clusters and makes disk operations faster. Defragmentation programs have been known to damage files, so make sure your disk is backed up first. Another good step is to protect your system from computer viruses. Computer viruses are programs designed to infect computer systems by copying themselves into other computer files. The virus program spreads when the infected files are used by or copied to another system. Virus programs are dangerous because they are often designed to damage the files in a system. You can protect yourself from viruses by installing an anti-virus program. Lastly, learn to use system diagnostic programs; if they did not come with your system, obtain a set. These programs help you identify and possibly solve problems before you call for technical assistance. Some system manufacturers now include diagnostic programs with their systems and ask that you run the programs before you call for help.

f:\12000 essays\technology & computers (295)\How to make a webpage!.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
For my science project I chose to create a web (internet) page dealing with science. This project consists of using a computer and an HTML editor to create a page that can be found on the internet. The next paragraph will explain how to make an internet page. The steps to making a web page and posting it on the internet are easy to follow. Most web pages are made in a code called HTML, which is what I am using to make my science web page. HTML is an acronym for Hyper Text Markup Language.
The HTML codes are very easy to use and remember. If you want to spice up your web page, you may want to use another language called Java. The word Java is not an acronym; it comes from its maker, Sun Microsystems, a tremendously large company that deals with web technology and the internet. Java enables you to have those neat scrolling words at the bottom of your web browser, and the other neat moving things that you may find on web pages around the net. Another way to spice up your web page would be CGI. CGI stands for Common Gateway Interface; it is used to submit information on the internet. You can get a book at your local library that explains how to use HTML, Java, and CGI. You now need to select one of the many programs that allow you to make a web page using HTML, Java, and CGI. Once you find this program, you may start to enter your HTML, Java, and CGI code. After long hours of work you may test your web page; depending on the program you are using, there is usually a button you can press that lets you look at the web page you have made. After revising and checking your web page, it is time to place it on the internet. To do this, you may have to contact your internet provider and ask them if they allow their customers to place internet documents on their World Wide Web server. Once you have it on the net, tell all your friends about it so you can get traffic on your page, and maybe one day you will win an award for it, and all that work will have paid off. Every time I make a web page for something or someone, I always learn new HTML, Java, and CGI commands, because I always like to try new things, to see if they will work, or to see what they will do. When I made this web page, I learned how to do different things all at once, which I had never done before. Making web pages is fun for people who are experienced with web page making, and for people who are very computer literate.
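A minimal example of the kind of HTML the essay describes can be built and saved from a short script. The page title, text, and filename below are invented placeholders; the tags themselves (html, head, title, body, h1, p, a) are standard HTML:

```python
# Assemble a minimal HTML page as a string and save it so it can be
# opened in any web browser. The content is placeholder text only.
page = """<html>
<head><title>My Science Project</title></head>
<body>
<h1>My Science Project</h1>
<p>This page was written in HTML, the Hyper Text Markup Language.</p>
<a href="http://www.example.com">A link to another page</a>
</body>
</html>"""

with open("index.html", "w") as f:
    f.write(page)
# Open index.html in a browser to view the finished page.
```

Uploading that file to your provider's World Wide Web server is the "placing it on the internet" step the essay mentions.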
f:\12000 essays\technology & computers (295)\How to make phones ring.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Samuri Presents "Makin' Fones Ring"

Ok, this is easy. This is not really phreaking, but still kind of phun. This only works on Bell Atlantic fones and pay fones. All right, to make a Bell Atlantic fone ring, all you have to do is dial 811 and then the last four digits of the number from which you are calling. You will hear a dial tone as soon as you do this. Hang up for about 3 seconds, then pick up again; you will hear a strange tone. Hang up, and in 5 seconds the fone will ring. When someone picks up the fone they will hear the same tone you just heard. When the fone is hung up again it will reset to normal. You can do this with home fones AND pay fones. The phun part is IT WILL KEEP RINGING UNTIL SOMEONE PICKS UP. You can do this with your own fone and annoy your parents, or when you go over to someone's house. This is phun to do at places with rows of pay fones; you can get them all to ring at once. ----The

f:\12000 essays\technology & computers (295)\How to Surf the Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
How to Surf the Internet

The term "Internet," or "The Net" as it is commonly known in the computer business, is best described as an assortment of over one thousand computer networks, each using a common set of technical protocols to create a worldwide communications medium. The Internet is changing, most profoundly, the way people conduct research, and it will in the near future be the chief source of mass information. No longer will a student have to rely on the local library to finish a research essay - anybody with a computer, a modem, and an Internet Service Provider can find a wealth of information on the Net. Anybody with a disease or illness who has access to the Internet can obtain the vital information they need.
And, most importantly, businesses are flourishing at this present day because of the great potential the Internet holds. First of all, for a person to even consider doing research on the Internet privately, they must own a computer. A computer that is fast, reliable, and has a great deal of memory is greatly beneficial. A person also needs a modem (a device that transmits data between a network on the Internet and the user's computer). A modem's quality and speed are measured by something called the baud rate (roughly, how fast the modem transmits data, in bits per second; a kilobit is simply a thousand bits, much as a kilogram is a thousand grams). For example, if somebody was to go out and purchase a 2400 baud modem, they would be buying a modem that transmits data at about 2400 bits per second, which is definitely not the speed of modem you want if you're thinking of getting onto the Internet. Modem speeds then roughly double in the number of bits that can be transmitted per second, going from 4800 baud to 9600 baud and so on, eventually getting up to 28800 baud (which is the fastest modem on the market right now). To surf the Internet successfully, a person will have to own a 9600 baud modem or higher, and with recent advancements the Internet has offered, the recommended speed is 14,400 bits per second (14.4 kbps). A modem ranges in price; depending on the type of modem you want, the speed you need, and whether it is an external or internal type, modems range from as low as $20 to as high as $300. If a person is unequipped with a computer, most local libraries and nonprofit organizations provide Internet access where research can be done freely. Having Internet access in libraries is extremely beneficial for citizens who do not have access to the Internet at home, as it gives them a chance to survey the vast amount of information available on the Net.
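Those modem speeds translate directly into download times. The rough calculation below assumes the modem sustains its rated bits per second and ignores start/stop bits and protocol overhead, so real-world times would be somewhat longer:

```python
def transfer_seconds(file_bytes, bits_per_second):
    # Each byte is 8 bits; real modems add framing and protocol
    # overhead, so actual transfers take somewhat longer than this.
    return file_bytes * 8 / bits_per_second

one_megabyte = 1_048_576  # bytes

# Compare download times for one megabyte at common modem speeds.
for bps in (2400, 9600, 14400, 28800):
    minutes = transfer_seconds(one_megabyte, bps) / 60
    print(f"{bps:>5} bps: {minutes:6.1f} minutes per megabyte")
```

At 2400 bps a single megabyte takes nearly an hour, while at 28800 bps it takes about five minutes, which is why the essay warns against slow modems for Internet use.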
And it is absolutely true that the Internet is evolving into the greatest tool for searching and retrieving information on any particular subject. Searching for information on the Internet using libraries and other nonprofit organizations can be a bit uncomfortable, though. Those people who already own a computer and a modem, and are ready to take hold of the highway of information the Internet provides, might want to consider getting a commercial account with an Internet Service Provider or ISP (a company or organization that charges a monthly fee and provides people with basic Internet services). Choosing your ISP may be the most difficult decision you must make when trying to get on the Net. You must choose a service that has a local dial-in number so you do not end up with monstrous long distance charges. You must also choose an ISP that is reliable, fast, and has a good technical support team who are there when you're in trouble or have a problem. Typically, most ISPs charge around $25 to $30 per month and allocate approximately 90 hours per month for you to use the service. You must be aware that even though there are some ISPs who charge only $10 to $15 per month for unlimited access, they may not live up to your expectations; so it would be advisable to spend the extra $15 or $20 per month to get the best possible service. No matter how a person gets connected to the Internet, they will always be able to search for information about any topic that enters their mind. And it is the Internet that is changing the traditional methods of how people research specific topics. The tools that simplify the research process make the Internet another invaluable method of obtaining information. Most people who already know how to surf the Internet properly have no trouble finding information quickly and logically. However, for new people who are just starting to use the Net, the process can be quite troublesome.
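The ISP pricing comparison above reduces to a per-hour figure. This is a small sketch using the essay's own numbers; the function name is mine, and the 90-hour figure is applied to both plans just for comparison.

```python
# Effective hourly cost of the two ISP plans described above:
# $25-$30/month for ~90 hours, versus $10-$15/month unlimited.

def hourly_cost(monthly_fee: float, hours_used: float) -> float:
    """Dollars per hour actually paid, given a monthly fee and usage."""
    return monthly_fee / hours_used

premium = hourly_cost(25.0, 90)  # the 90-hour plan at its cheapest
budget = hourly_cost(15.0, 90)   # the unlimited plan, if you still used 90 hours
print(f"premium: ${premium:.2f}/h, budget: ${budget:.2f}/h")
```

The arithmetic alone favours the cheap plan; the essay's point is that reliability and support, not the per-hour rate, are what you are really paying the extra $15 or $20 for.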
Some of the tools used for searching the Internet include Electronic Mail, or E-mail, a messaging system that allows you to send documents, reports, and facsimiles to users on the Internet. Every user on the Internet has their own E-mail address and can send messages to anyone, as long as they know the other person's E-mail address. One easy way of obtaining information about any topic is to join a mailing list, where mail sent to one address is distributed to all members of that particular list. Mailing lists are a fun and easy way of gaining the important information a person may find on the Net, and another example of how useful the Internet is and can be. Another way a person can gain information through Electronic Mail is by exchanging messages publicly over the Net; these messages are sorted into different areas called newsgroups, often referred to collectively as Usenet News. There are currently over 13,000 newsgroups, which any user with access to the Net can read. People send and receive messages about whatever topic a newsgroup is devoted to, which makes it an excellent way of gaining information quickly and easily. Usenet News is also a way to receive up-to-the-minute information about timely topics. A further tool for exploring the Internet is Gopher, which is perhaps the most popular non-graphic way of searching the Internet. It provides interconnected links between files on different computers around the Net. Gopher provides access to an enormous amount of text files, documents, games, reference files, software utilities, and much more. Gopher is menu-oriented, making it fun and easy to search for information, because the only thing the user has to do is point and click. The World Wide Web is a lot like Gopher; the main difference is that it uses a mixture of text and graphics to display a wide assortment of information.
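The E-mail mechanics described above - an address, a recipient, a message body - can be illustrated with Python's standard email library. This is only a sketch: the addresses are invented, and actually delivering the message would require an SMTP server (via `smtplib.SMTP(...).send_message(msg)`).

```python
# Composing an Internet mail message with Python's standard library.
# The addresses here are made up for illustration; posting to a real
# mailing list address would fan the message out to every subscriber.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "student@example.edu"
msg["To"] = "mailing-list@example.org"
msg["Subject"] = "Question about Gopher"
msg.set_content("Where can I find a good Gopher menu for reference files?")

print(msg["Subject"])
```

A mailing list works exactly as the essay says: the list software receives one message like this and re-sends it to every address on its membership roll.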
The Web is one of the most effective methods of providing information because of its visual impact and multimedia foundation. Many search tools are available on the Web to help users more easily search for materials that interest them. There are some users who fret about having an information overload. They see themselves surfing a sea of random facts, information of varying quality, humour and entertainment references, people and places. The on-line world contains chaos, as does the real world. Although some say the Internet contains too much information for people to make sense of, there is tremendous proof that people will find their place on the Internet with plenty of help. And everybody will grow to make sense of the information available, just as millions of users already do.

f:\12000 essays\technology & computers (295)\How will our future be.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
HOW WILL OUR FUTURE BE? The way the future is heading seems to be very clear, but as before, things may change. The time to come will never reveal itself until it has actually been. From this point of view I will try to describe the way I see the future coming our way. One of the major aspects when discussing the future is how the law will be handled and how power will be dealt with. Will we be able to decide for ourselves what we want to do with our lives, and will the rights of every individual be respected, as written in the constitution? There is no way I could be forced to believe otherwise. Our society today must decide whether every citizen in Denmark should have some sort of card used for multiple things: your health insurance, driver's licence, personal identification and many other things. Some people say that this is the beginning of a completely government-controlled society where your every move is followed by the administration. The year is 2096. We are standing in the airport near Copenhagen.
A lot of people are walking by with their net-agents - small computer programs that have been trained to inform you about all the things you find interesting. To identify themselves, people have their citizen-card plugged into the device. An agent is calling our net-computer. He wishes to inform us about all the activities in Copenhagen today - but of course only the ones he knows we might be interested in. The agents are a very handy invention, created in the late nineties by a small company called Micro-help. Nowadays everybody has one or more. The net-agents work 24 hours a day on the global, fibre-optic network. The network is so fast that you never experience the bottlenecks of transferring data like in the old days. This is a major advantage for the large number of people working from their own homes. They use a technology called net-meeting if they have to discuss some paperwork over the net. It is possible both to look at your colleague in the video-chatter and at the same time write in the word-processor while the other person watches and comments. Schools all over the world are using this technique to exchange information. There is also a separate net called cyber-net. This is even more advanced than the other net. It is the ultimate cyber-space, where you can virtually be anywhere on the entire planet, and you can even visit the other planets in our solar system. This net is based on the sophisticated Virtual Reality Nirvana 3D technology. In spite of the wide spread of computers in all layers of society, it is only the really big companies and their employees that take VRN-3D into use. Imagine a world which is like a movie where there are only good things - no pain. In this society of the year 2096, almost everyone is living in the cities, from where they control their daily functions. Even the farmers live in the city. They have given up the dirty work and started to maintain their acres and their stock from computers.
They have agents checking on the cattle 24 hours a day. They do this by a neural implant called the CAT-Tracer. This implant can interface with the brain and thereby sense whether everything is all right. The agents can even administer medical treatment if needed. Copenhagen, like every other major city, has stopped growing wider and begun to grow upwards and downwards. This small finesse protects the environment from being run over by bulldozers. Every time more settlements are needed, they just build another storey on top, because it is cheaper than digging under the big city. This means that there is no lack of residences. On the educational side of society, there is a lot more to learn now than there was earlier. You have to be completely into how a computer works and what its major potential is - the functionality of the programs. In other words, you have to be an expert in this area. Furthermore, you have to get an education in the field of the industrial direction you may want to work in later on. The learning process is sped up by an implant for better perception and memorising. For years the human race thought that genetic manipulation was the way to a better race. Today we know that nature is much better at selecting the fittest. A lot of money is saved by not doing those extremely expensive experiments. The reason why we have chosen to use implants is that they can be removed at any time, and are not a permanent change as genetic manipulation was. Generally I think humanity has finally learned not to repeat the tragedies of history. We have to work with and for nature, and understand that it is our "BIG BROTHER", who is watching out for us and our every little step.

f:\12000 essays\technology & computers (295)\Httpwww CHANGE com.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
HTTP://WWW.CHANGE.COM Joe the mailman will no longer be coming to your door.
You won't have to go pick up your newspaper in the bushes at 6:00 am anymore. Libraries will be a thing of the past. Why is this all happening? Welcome to the information age. "You've got mail!" is the sound most people are listening to. No more licking stamps - just click on the "send" icon, and express delivery service will take on a whole new meaning. The future is here. Now, a mouse is better known as a computer device than as a rodent. Surfing is being done over the Internet instead of at the beach. Games are no longer bought at toy stores, but are downloaded onto our computers. All of this new technology sounds fascinating, but will it benefit more than it will hurt? Think about my opening sentence - catchy, right? Well, think about it again. What is going to happen to good ol' Joe? And those nice librarians, what about them? Will they be out of a job? Will they be forced to operate computers that are foreign to them? How do we as a society adjust to technological change? The answer lies in society's ability to effectively measure the costs and benefits of technological change. The rapid growth of technology brings with it a massive amount of hope, but also despair. Kids are growing up with computers. They are learning more, and faster, than previous generations could. This is wonderful, right? Maybe not. Will computers deplete the social skills kids need to mature? Will being a member of America OnLine rather than a youth group prove to be helpful, or the opposite? Our generation will need to lead this technological revolution in the right direction. We need to offset the obstacles in our path. We need to make sure the flow of change is going to be a positive one. The answer lies in our hands. We need to utilize the technology given to us, and make sure it is used in a positive sense. We need to take the Internet and the World Wide Web and rid them of their evils.
We need to make sure terrorist secrets and bomb recipes are not being exchanged, and make sure educational tools are. We need to make the Internet a source to help find jobs, rather than a catalyst to replace them. These are the hardships we must get rid of. So what are we going to do about it? We need to educate everyone, young and old, and make computer illiteracy a thing of the past. We need to maximize computer security to its fullest extent. Computers shouldn't replace jobs, but rather be a tool in them. Our generation is being handed great technology, and we have to rid it of its flaws. This is what needs to be done to make technological change great. The possibilities suggested by technology are endless. There are numerous problems that arise from such powerful technology. However, with the number of smart minds out there, it is likely that these problems will find solutions and information technology will live up to its glamorous expectations. So, Joe the mailman can keep on delivering that mail, but maybe with a computer to help organize and make his deliveries quicker. The librarians can keep putting books on the shelf, along with software and multimedia too. Welcome to the future.

f:\12000 essays\technology & computers (295)\Human memory organization.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Human memory organisation. Human memory organisation, from the outside, seems to be quite a difficult thing to analyse, and even more difficult to explain in black and white. This is for one main reason: no two humans are the same, and from this it follows that no two brains are the same. However, having said that, it must be true that everyone's memory works in roughly the same way, otherwise we would not be the race called humans. The way the memory is arranged is probably the most important part of our bodies, as it is our memory that controls us.
I think that it is reasonable to suggest that our memory is ordered in some way, and it is probably easiest to think of it as three different sections: short term, medium term, and long term memory. Short term: this is where all of the perceptions we receive come to - from the eyes, nose, ears, nerves, etc. They come in at such a rate that there needs to be a part of memory that is fast and can sift through all of these signals, then pass them down the line for use or storage. Short term memory probably has no real capacity for storage. Medium term: this is where all of the information from short term memory comes to be processed. It is analysed, and a decision is made about what to do with it (use it, or store it). This is also where stored information is recalled for processing when needed. This kind of memory has some limited storage space, which is used when processing information; the trade-off, however, is that it is slower than short term memory. Long term: long term memory is the dumping ground for all of the used information. This is where medium term memory puts its information and takes it from. It has a large amount of space, but is relatively slow in comparison with the other kinds of memory, and the reliability with which memories are stored is dubious, as we are all known to forget things. There is quite a good analogy in Sommerfield (fourth edition, pp. 24-25): short term memory is comparable to a computer's registers, medium term (working) memory is like a volatile storage place for information, and long term memory is like hard disk storage. I think that this is quite a good way of describing our own memory hierarchy. It seems that when information is processed and then stored, it is not stored as raw attributes such as "black" or "round", but as what we see. For example, if we see a red cup, we store the information about the cup together, i.e. it is red, how high it is, what shape it is.
Now if we see a black cup, we still recognise that it is a cup, even though the colour has changed. It is clear that if the small storage capacity of short term memory did not pass information quickly on to the working memory (medium term memory), then as new information came in, the old information would be forgotten. Likewise, if working memory tried to store too much, with more being passed to it from short term memory, again there would be information loss. The way that memory gets around this problem is not unlike structured programming. There, tasks are divided into different steps (while loops and if statements), so that the different tasks contained in one problem can be tackled by the short term memory in stages. This means that all of the related information is loaded in stages, the single task is solved, and the memory is updated with the next task, until the whole problem is solved. This way of working means that there is no need to load unrelated information at the same time, saving on time and on the work that the memory has to do.

f:\12000 essays\technology & computers (295)\Identification of designing a web page for your school.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Identification The Doha College is a coeducational school for students aged between 11 and 18 years. It is situated in the Arabian Gulf in the capital city of the State of Qatar. It provides a British curriculum to students from over 40 different countries. Although the culture here does not resemble European and Western culture, the environment of the classes is the same. The Doha College is one of the few schools in Qatar that represents the British system of teaching. The College's foundation was the result of the efforts of many members of the British community, including the British Ambassador, who became the first ever Chairman of the Board of Governors.
The Qatar Government approved the project and continues to give its full and appreciated support and encouragement. The Doha College was opened in September 1980. The College moved from its old premises to the new ones, located on the Salwa Road, in April 1988. The college has now become one of the best schools in Doha. Over 650 students from all over the world are currently being taught in the school, and over a hundred students join the school every year. A lot of events take place around the school, for example fairs, sports events and school parties. All of these events are publicised around the school premises to tell the students about them. These published documents are found mainly on notice boards. The College publicises itself in many ways. For example:

College uniform
Word of mouth (rumours)
Year Books
Sports events
School productions
Fairs and fund raisers
Posters sent around schools
College prospectus
Ads and logo in newspapers
Communication between parents and school, e.g. newsletters and parent-teacher meetings

The Task Much of the printed publicity of the College tends to be rather dull. Although the printed publicity of the College is often very informative, it doesn't really attract anyone to actually sit down and read it. The printed publicity is often printed on a laser printer, photocopied about 20 times and given to the class students. This publicity comes out in two colours, black and white, which makes the documents very unattractive. These publications are very unoriginal. The letters and information given to the parents from the school can be perfectly straightforward, but they still contain too much information - commonly known as information overload. We have decided to concentrate on designing our own new publicity system for the Doha College.
This system could include:

Letters home to parents
Posters advertising the College
Newsletters
Option booklets
School prospectus
Year books

These publicity systems have to be: colourful, attractive, informative but not dull, eye-catching, original, modern - something that the students, or anyone else, won't forget. We have decided to produce a newsletter as our publicity system. We are going to design our own newsletter, mentioning all the events taking place around the school. For example, we could mention any sports events or sports fixtures taking place around the school. We could also mention the school fairs, or any other special activities taking place around the premises. To check whether our newsletter is satisfactory, we could hand it out to different people of different ages. We could take into consideration all their different opinions and make any changes needed to our new publicity system. We are going to compare our new publicity system (the newsletter) with the other newsletters found in the college. We have looked at the newsletters that are sent around by the school, and as you may already know they are particularly boring! So we are going to make our publicity system: a) Attractive - it has got to make parents and students read it; if it is unattractive, no one will even look at it. b) Interesting - it has got to be interesting, so that parents and students can read the publicity system and at the same time enjoy what they are reading. c) Original - the newsletter will have to be original, because if it isn't, parents and students will not be attracted to it, thinking it is going to be the same type of document again! Hardware: We have already identified what our publicity system will look like. Now we must describe the different types of hardware we are going to use. Mainly we will be using DTP to work on the newsletter. 1. The mouse: the mouse is one of the most important tools for working in DTP.
The mouse helps us to move around the document easily and allows us to transmit the movement of our hand to the computer. 2. The scanner: the scanner will be a very important tool for working in DTP, because we may want to incorporate our own pictures or photographs into our document. 3. The printer: the printer, whether it is an ink-jet or a laser printer, could be used to output the document. Obviously, to check how our new publicity system looks, we are going to use the printer. 4. The visual display unit: the monitor should be as large as you can afford, because this will avoid eye strain. You might also have to work with two screens at the same time, which is why we need a VDU. 5. The digital camera: the digital camera may be used to take pictures of the school premises or photographs of the people in the school. We would prefer to use this sort of hardware - mainly the digital camera, or even the scanner - to produce our publicity system. The newsletter has to be attractive; that is why we are mainly going to use the hardware that will provide us with pictures. During the year we have already done some rough ideas and sketches of how our new publicity system will look, experimenting with Microsoft Publisher and Visio to do some rough designs of what our leaflet will look like. I personally believe this is a good idea, as it gets us into the habit of working with DTP. Web Page Following some research, we have realised that we are not going to create a newsletter, but a web page. Identification of Web Page project: The Internet is quickly becoming the fastest medium of communications, with over 60 million users worldwide. Recently, Q-tel introduced its own server for Internet access. While this project is new and the Q-tel file server can hold a very limited number of customers at one time, the service is very cheap (QR6 per hour) and has enjoyed a fair amount of success.
The introduction of this technology to Qatar has provided people with a huge amount of information and, more importantly, a chance to interact with people around the world through e-mail, newsgroups and homepages. This is where our project comes in. We plan to create a web page for the Doha College. Our plan will include using our knowledge of the Internet's standard language, HTML, as well as a web authoring program. The one I have at present is Microsoft Front Page, but shareware versions of Hot Dog and other tools are available online for free. After creating the page we plan to publicise it through popular search engines such as 'Yahoo!' and 'Webcrawler'. Finally we will show the page to the IT department for evaluation. Once the page is up and running, heads of different departments can write and update different material on their part of the page. This goes one step further than the Doha College's current publicity system, because it will make the school known to Internet users around the world. Our first draft for this project will be on a web site that offers its users free web sites. We already have an account on this site, which includes a free homepage (http://www.geocities.com/sunsetstrip/alley/3321) and an email address which we have not used yet. The disadvantage of this is that Geocities is a huge web site that gets millions of hits a day, so access is slow and sometimes, during peak hours, impossible. Even though a web page at Geocities is initially free, upgrades for the page cost a lot of money over the long run. Things like memory upgrades, personalised voice greetings and Java applets are either hard to operate or carry a monthly charge. This charge is tiny, though, when compared to the cost of getting an original .com URL. Something like HTTP://WWW.DOHACOLLEGE.COM would cost an initial fee of $100.00. Banners on the top web pages like Yahoo! can cost up to ten times that amount per week. The advantages definitely outweigh the disadvantages though.
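Since the project names HTML as the page's language, a first draft can be nothing more than a text file written by hand or by a short script. The sketch below is a hypothetical illustration (the page content and filename are invented, not taken from the project); it shows how little HTML a first draft actually needs.

```python
# Writing out a minimal HTML first draft with plain Python string
# handling. The page text here is invented for illustration only.

page = """<html>
<head><title>Doha College</title></head>
<body>
<h1>Welcome to Doha College</h1>
<p>A coeducational school in Doha, Qatar, teaching the British
curriculum to students from over 40 countries.</p>
</body>
</html>
"""

# Save the draft; a web authoring tool or browser can open it directly.
with open("index.html", "w") as f:
    f.write(page)
```

A page like this is exactly what a browser's "view source" shows, which is why the project can copy HTML from one page to another as described below.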
A very small, obscure web site gets visited by about ten people a day. With very high publicity inside the college and Doha in general, the Doha College homepage can guarantee at least triple that amount in one day from Q-tel users alone. Regular listing in all the top search engines is usually free. Things like counters, which count how many people visit a page, are available from the web counter homepage at HTTP://WWW.DIGITS.COM. HTML text can be copied from one page to another through any browser. The latest versions of web authoring tools are becoming less expensive and are capable of many more tasks, such as Java and background sound. The recent introduction of the new Qatar homepage (HTTP://WWW.QATAR-ONLINE.COM) and a surge in Internet users in the Gulf have made publicity easy. URLs of pages can be added at no cost to sites such as Qatar- and Emirates-online. If we face a problem with any aspect of building the web page, online help is widely available, with thousands of pages dedicated to help on one aspect alone. The web page's construction will also be influenced by feedback from visitors through email. We will need a lot of software to make the web page complete. The first thing we will need is an Internet connection with a server. We have one with Q-tel, which includes access to the Internet (QR6.00 an hour) and an email account (akkad@qatar.net.qa). The Q-tel account can be very slow at times, and if a lot of users connect at one time the server may overload. The dial-up screen of Q-tel is not very big on security; many people have managed to hack into the password files. But Q-tel is improving, and access is much quicker than it was three months ago. The second thing we need is the latest version of a popular browser such as Netscape Navigator 3.0+ or Microsoft Internet Explorer 3.01+. Browser models are being upgraded all the time, with the 4.0 versions of both I.E. and Navigator being released soon.
The capabilities of the best browsers include Java and ActiveX controls, which allow animation and other multimedia to be viewed on the Internet. They can also view the HTML source of documents on the Internet, which can be copied onto other sites. Plug-ins such as Shockwave and RealAudio are available for all browsers and platforms, and can make the Internet more dynamic with CD-quality sound and movies. Most browsers come with mail and news programs to send and receive mail, and to post to and read newsgroups.

f:\12000 essays\technology & computers (295)\Identity theft speech.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Boo! Are you scared? You should be. You see, I'm a ghost, and every day I tap into the information cyber world. And every day I have access to you. Worse yet, I could be you. According to the Secret Service, approximately half a billion dollars is lost every year to identity theft online. What people don't seem to realize is that the Internet world is just like any other community, so it's safe to assume the cyberworld acts as any natural community would, with entrepreneurs, vast corporations, little guys selling warez, doctors visiting patients in their cyber offices, church organizations, and cyber crime as well as cyber criminals. With so many new users signing on every day, the cyber world has become a tourist trap, so to speak, where nameless, faceless con artists work the crowds. Ghosts. Anybody can fall victim to a ghost. Nobody is really, truly safe, because our personal information is distributed like candy on Halloween. We use our social security numbers as identification numbers. Credit card numbers are printed on every receipt. And our license number is used on a daily basis. Despite this, there are ways to prevent yourself from falling victim to identity theft. You see, a criminal likes easy prey. They don't want to have to work for it.
It's like locking your car at the mall: sure, someone might break in anyway, but chances are if your doors are locked they will probably move on to another car. First off: never give your credit card number out online unless you are positive that the company you are dealing with is legitimate and reputable. If you aren't sure, call the Better Business Bureau. Never give out your social security number unless you absolutely have to; the only times you are legally obligated to give out your social security number are when you are requesting government aid of some kind or for employment reasons. I also have information packets that I will hand out regarding a company that, for a small cost, has information about everybody. The packets have detailed information on how to have your name and your family members' names removed from their database system. Now you might be thinking, "Granted, I can see why you wouldn't want to give out your credit card numbers, but what could actually be done with my social security number?" Everything. This is your most vital information. Say I were a cybercriminal. Say I came across your social security number while perusing the school database. With your social security number I can obtain information about you through the school - by, oh, requesting a transcript, for instance. Later I could sign on to my anonymous account and fill out an application for an American Express card, and maybe a MasterCard, and oh, I could use a new beeper too; this one is shot. By changing your address to a PO box or an abandoned apartment mailbox, I could pick up my new cards and legitimately become you. I could also take it another step and request a new birth certificate, because it's not at all difficult to find out where you were born - the service I mentioned earlier has all of this information for me. I can even get a photo license with your information.
Scared yet? So I charge you thousands of dollars, and you don't even know it until you try to take out a loan for your daughter's new car. Granted, in most cases you might not have to pay for the monetary damage directly, but it will take you years to fix your credit. This is identity theft. As I said earlier, anybody can become a victim. But now you have vital information that could prevent you from becoming one. So never give out your information unless you absolutely have to. Do yourself a favor and do your transactions in person. The information cyberworld is a wonderful place to visit, but just like in Tijuana, don't let the little Mexican guy sell you a gold necklace for $80, and don't fall prey to the ghosts.

f:\12000 essays\technology & computers (295)\IMPLEMENTING A CAD SYSTEM TO REDUCE COSTS.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
IMPLEMENTING A CAD SYSTEM TO REDUCE COSTS Introduction This report will analyze a proposal on how Woodbridge Foam could become more competitive through improvements in technology. This includes saving the company money, shortening the design time for new products, decreasing quoting time and improving quality overall. By implementing a company-wide CAD system, networked together with each customer and all plants, these improvements could be achieved. Research will include interviewing various employees as to how business is done and what influences the winning or losing of a contract. Research will also include study of both customers' and competitors' systems. Project Scope & Current Evaluation Goals Supported by CAD Initiative: In converting to a completely independent CAD system, there are a few aspects of operation which would be greatly improved. The first of the improvements would be the elimination of paper communication. The need to transfer large drawings using mylars would cease, thus helping provide a paperless environment.
Another improvement as a result of CAD would be that of achieving much tighter tolerances in building new products. Using a CAD system, part designs could be received in an electronic format such as a math model. These models are currently in use by customers such as GM, BMW, and Mercedes. Having math models of all new products would enable a quicker turnaround in both quoting and production of products. CAD Vendors & Hardware Suppliers: Upon observing the various systems used by several customers and suppliers, the major CAD vendors worth consideration have been identified. Manufacturers of high-quality workstations which have been distinguished are:
Hewlett Packard (HP)
IBM
Silicon Graphics (SGI)
SUN
Premium, fully functional CAD solutions are:
CATIA (Dassault / IBM)
Computervision (Computervision / TSI)
SDRC (SDRC / CAD Solutions)
Unigraphics (EDS)
Current System Description Success Factors: In implementing a new, otherwise foreign system into an established, habitual way of doing things, there are several success factors which must be examined. If these factors are carefully thought over, a favorable shift from old to new may be obtained. Some critical success factors are as follows:
Vendor availability - Will the chosen system supplier be readily available for technical support?
Product engineering acceptance - Will those who are set in their ways be willing to abandon their habitual manner of operating?
Training - Thorough training of all related employees must be completed before introduction of the new system.
Data management - A new manner of recording all vital information must be established and proper procedures documented.
Customer interface - Will the chosen system be compatible with those used by our customers, and will needed data be easily convertible?
Company Weaknesses: Currently, many aspects of our situation present problems in coping with changing times, which in turn affect the development of technology. 
Some weaknesses in the company which curtail our keeping pace with the developmental progress of our customers and suppliers are:
We cannot easily accept electronic data;
We must deal in manual drawings;
We have many copies of similar drawings;
We have multiple ECN levels;
We have minimal CAD knowledge;
We must perform manual volume calculations.
Threats to Business: If steps are not taken to improve on the present company weaknesses, there are bona fide threats which could potentially harm future progress and business. Once the weaknesses in the company have been eliminated, the following threats to our business may be removed or greatly reduced. The immediate threats are:
Suppliers may assume the design role;
Competitors are able to accept electronic input;
No business with new products;
Deterioration of communications;
Lost productivity.
Process Description: As in most large corporations, our process generally follows a standard order of operations. There are several departments or areas, each with its own function; based on the function of a department or area, a focus area is established and followed.
Department/Area: Function
Customer: Designs seat
Product Engineering: Designs tool to manufacture seat
Supplier: Builds tools and supplies components needed to manufacture and construct seat
Product Evaluation & Costing: Costs seat based on foam and components used, manufacturing costs, and assembly
Purchasing: Locates seat component suppliers and oversees development and manufacture of components
Plant: Manufactures and assembles seat
Quality Control: Ensures that products meet our own and customer standards
Sales / Marketing: Processes orders and manages overall customer relationships
New System Requirements CAD System Requirements: The CAD system which is chosen must be capable of performing several specific tasks. In order for a new system to be of any use to the company and an aid to its advancement, it must present an improvement in various areas. 
Some of the short-term requirements of a new CAD system are:
Capable of 3D modeling, including solids;
Usable for simple or complex drafting applications;
Suited to quickly performing volume calculations;
Able to translate various forms of math data.
Product Evaluation & Costing (P.E.C.) Requirements: With respect to all the various areas of the company, the role of the P.E.C. department is one of the most important to profit. Once the costing department receives a part request from a customer, it is the responsibility of the costing department to ensure that the life cycle of the part's development is managed cost-efficiently. When a current product undergoes an engineering change, it is the responsibility of the costing team members to note the changes. The product must be re-costed, accounting for variances in foam and components. If an increase in foam is noted, the change must be calculated. Using manual calculations, the new part volume is derived and the customer is charged accordingly. Because foam variances are obtained manually, customers may at times not be fully charged for the added cost of foam. Using a CAD system to perform the volume calculation, the answer would be definitive. The time needed to ship a print is approximately two days. If math models of products were sent via e-mail, the information needed by the costing department would be obtained two days earlier. Once complete, a costing package would in turn arrive at a plant, also two days earlier. In effect, a total of four days could be eliminated from the time needed to begin manufacturing a product. Solution Evaluation & Recommendation Benefits of CAD System: In utilizing a CAD system, many areas of operation are directly or indirectly affected. Because of the speed and accuracy with which a professional CAD system operates, time, and thus money, may be saved. 
Potential CAD project benefits include:
Improved accuracy in quotes and design;
Reduction in copying and courier costs;
Faster and more accurate calculations of complex volumes;
Management of an expanding drawing database;
Improved electronic communication with customers and suppliers.
Recommended Vendor/Supplier: Based on thorough presentations made to Woodbridge Foam executives by each candidate, and on each system's penetration among key Woodbridge customers, it is recommended that Unigraphics be implemented as the solution. The Unigraphics system is currently used by 40% of Woodbridge customers. This system is also capable of performing all of the previously mentioned tasks: 3D modeling, drafting, volume calculations, and translating different forms of math data.
Justification of CAD & Unigraphics
CAD justification includes:
Elimination of mylars;
Encouragement of a paperless environment;
Reduction in copy and reproduction costs;
Reduction in courier costs;
Faster and more accurate part volume calculations.
Unigraphics justification includes:
Used by key customers such as Chrysler and GM;
Ability to convert data used by all customers;
Extra commitment and availability for technical support;
Extensive research into the company prior to the presentation.
Workstation Cost: One-time costs for one workstation:
Unigraphics Software License: $30,000
Hewlett Packard Workstation: $45,000
EDS Assistance (Assessment/Help): $5,000
Training (UG Education): $10,000
Consulting Assistance: $7,500
Printer and Plotter: $30,000
Hummingbird/Exceed PC Access Software: $10,000
One-time total costs: $137,500
Annual Maintenance Costs: $3,750
Cost Reductions: As previously mentioned, the implementation of a CAD system will reduce costs in several areas. By eliminating the need for physical prints, the cost of reproducing and shipping prints will be eliminated. 
Some potential cost reductions, in dollars, are:
Prints: 35,000
Mylars: 75,000
Courier: 5,500
Travel: 16,000
Plants (saved travel): 90,000*
Productivity improvement: 75,000*
TOTAL SAVED: 296,500
Productivity Improvements: There are some improvements in productivity which do not present a monetary value. These improvements, however, will benefit the company and customer relations. These non-monetary productivity improvements are:
Improved accuracy;
Improved customer satisfaction;
Support for higher tolerances of products;
Improved on-line access to information;
Improved internal communication between Woodbridge departments.
Conclusion: As advancements in technology continue to be the norm, it is essential that those who wish to remain competitive keep pace with these advancements. In the case of the Woodbridge Foam Corporation, maintaining an equal standing with technological advancements will allow for improvements in the company as a whole. Cost savings may be realized in the areas of print and courier costs, while the need for paper transfer is eliminated. Tighter tolerances, shorter quoting time, and an overall improvement in quality will in turn improve the satisfaction of our customers. Because of these advancements in technology within the company, the saying "a satisfied customer is a return customer" may be brought to life.
f:\12000 essays\technology & computers (295)\Improvements to the School Districts Local Area Network.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction This evaluation of our school district's network of personal computers will closely examine the current system and identify potential improvements to it. The evaluation covers the administrative departments in the school district, which are handled separately from the educational departments. 
Logistics of the Network The logistics of the computer systems currently in use break out as follows: approximately 500 personal computers, 21 file servers, a Digital VAX, and an Ethernet network running at 10 megabits per second that spans 52 buildings, with wide area network links running at 56 kilobits per second to 1.55 megabits per second. The primary operating system in use is Microsoft Windows 3.11, but Windows 95 is being phased in to become the primary workstation operating system. Novell NetWare 3.12 is the primary network operating system, but Windows NT is being phased in to replace it. Each personal computer in use is utilized, for the most part, by only one user. This means that each system has a standard configuration at the system level, but at the GUI level users are free to set up their environment however they wish. Virtually every administrative employee in the school district has a personal computer on their desk that they need in order to perform their assigned tasks. Each user of the network processes mainly text-based documents. In some cases graphics are embedded into documents to increase their professional look, but this minor use of graphics would not be considered desktop publishing. Since the school district's departments are separated by function, there is no need for video teleconferencing and it is not being used. Evaluation of System Configuration Currently each building is handled as if it were a separate organization. Each site has its own server, and all of the accounts are stored on the local server. Also, every one of the applications in use is stored on each client's workstation. This configuration is excellent for fault tolerance because each server can operate without the presence of any other server. a. 
Servers The servers were evaluated for manufacturer service support, available disk space, installed random access memory, processor speed, built-in fault tolerance, and the type of network interface card being used. The servers in use are actually personal computers that have had additional memory and larger hard drives installed so that they could be used in a server capacity. The manufacturer of the computers being used as servers is Gateway 2000. This company's service support is handled through phone and mail only. If there is a hardware problem with a component of the server, it will take at least 24 hours to receive a replacement part. This turnaround time is not acceptable, because as many as 200 employees would be unable to work while the server that they use is down. The processors in use are Intel Pentium 90 MHz processors. This processor speed is adequate for the average demands being put on the server. The speed could become a problem with upgrades to different network operating systems, but for current utilization the processors are adequate. This is supported by the fact that average server utilization is 14% of the server's capability and by the fact that the servers are used as file servers only. Built-in fault tolerance on the servers is non-existent. The server configuration has only one hard drive, one controller card, and usually no tape backup. If any component of the server fails, the server will be down. The only backup being done on any server is from one tape backup unit on the main server in the MIS department. This is not effective because not all servers can be backed up every night; a bi-weekly or monthly backup may be the only backup available for any server. This is an extremely weak area in this network. The network interface cards being used are SMC's 32-bit Peripheral Component Interconnect (PCI) cards. These cards have a lifetime warranty and are replaced within 24 hours by the manufacturer. 
This type of card is adequate for the traffic being put on it by the users of the network. Current network speeds only reach up to 10 megabits per second, and these cards support that very well. A nice feature of this type of card is that it can be ordered with either the BNC, thinnet type of connector or the 10BASE-T twisted pair connector. This allows greater flexibility when implementing their use. b. Client "IBM Compatible" Personal Computers The systems being used by the school district vary greatly in age, speed, and configuration. The average system in use is a 486 66 MHz Gateway PC with 8 megabytes of RAM and a 540-megabyte hard drive. There are also some 386-25 MHz computers still being used, but they are being replaced with Pentium 100 MHz systems. It would be impossible to examine each system's configuration and include it in this report, but the district's standard configuration, which is being implemented, has been included. This represents the software being put on the systems and where it is stored. The configuration of the district's personal computers follows.
I. Operating System
 1. Microsoft Windows 95
II. Default Applications
 1. Microsoft Office Version 7.0
  a. Microsoft Word 7.0
  b. Microsoft Access 7.0
  c. Microsoft Excel 7.0
  d. Microsoft Presentation 7.0
  e. Microsoft Scheduler 7.0
 2. Word Perfect Version 6.1
 3. Quattro Pro Version 5.1
 4. Insync Co-Session Remote Version 7.0
 5. Reflections Version 5.1
 6. F-Prot Professional for Windows 95 Version 2.22.1
III. Protocols
 1. Microsoft IPX/SPX
 2. Microsoft Netbeui
 3. Walker Richer & Quinn's LAT protocol Version 4.03
IV. Clients
 1. Microsoft Windows Client
 2. Microsoft Netware Client
V. Installed Printer Drivers
 1. Hewlett Packard LaserJet
 2. Hewlett Packard LaserJet Series II
 3. Hewlett Packard LaserJet 4/4M Plus
VI. Network Interface Card
 1. Hewlett Packard Ethertwist Plus (27245B)
VII. Display Configuration
 1. Resolution and Refresh Rate
  a. Super VGA 640 x 480
  b. 75 Hertz
I. 
Windows 95 Environment Configuration
 1. Auto Arrange: On
 2. Accessibility Options: Off
 3. Time Zone: Mountain
 4. Screen Saver: Flying Windows (10 min delay)
 5. Background: Blue Rivets
 6. Installation Type: Typical
 7. Desktop Icons:
  a. Recycle Bin
  b. Microsoft Internet
  c. My Computer (User Specific)
  d. Network Neighborhood (School District)
  e. Microsoft Network
  f. Word Perfect Shortcut
  g. Quattro Pro Shortcut
  h. Microsoft Word Shortcut
  i. Reflection Shortcut (District VAX)
  j. My Briefcase
 8. Toolbar Icons
  a. F-Prot Dynamic Virus Protection
  b. STB Vision or ATI
 9. Microsoft Office Professional Toolbar
II. Default Application Configuration
 1. Microsoft Office Version 7.0
  a. Microsoft Word Version 7.0
   1. Default File Path: C:\mydocu~1 (C:\My documents)
   2. Timed Backup: 10 mins
   3. Backup Location: C:\mydocu~1
  b. Microsoft Access Version 7.0
   1. Default File Path: C:\mydocu~1 (C:\My documents)
   2. Timed Backup: 10 mins
   3. Backup Location: C:\mydocu~1
  c. Microsoft Excel
   1. Default File Path: C:\mydocu~1 (C:\My documents)
   2. Timed Backup: 10 mins
   3. Backup Location: C:\mydocu~1
  d. Microsoft Presentation
   1. Default File Path: C:\mydocu~1 (C:\My documents)
   2. Timed Backup: 10 mins
   3. Backup Location: C:\mydocu~1
  e. Microsoft Scheduler
   1. No custom settings made
 2. Word Perfect 6.1
  a. Default File Path: C:\mydocu~1 (C:\My documents)
  b. Timed Backup: 10 mins
  c. Backup Location: C:\office\wpwin\wpdocs
  d. Application Location: C:\office\wpwin
 3. Quattro Pro
  a. Default File Path: C:\mydocu~1 (C:\My documents)
  b. Timed Backup: 10 mins
  c. Backup Location: C:\office\qpw
  d. Application Location: C:\office\qpw
 4. Insync Co-Session Remote Version 7.0
  a. Protocols Supported
   1. SPX
   2. Netbeui
  b. Only Host Installed
 5. Reflections Version 5.1
  a. Connection: via LAT
  b. Static Host List:
   1. CSPS01
   2. CSPS02
   3. CSPS03
   4. CSPS04
  c. Color: PC Default 2
  d. Settings File: C:\rwin\settings.r2w
  e. Key Remap: VT => PC Keyboard F1-F4 keys
  f. Runs in a maximized window
 6. F-Prot Professional
  a. 
Floppy A: protection: Disinfect/Query
  b. Floppy B: protection: Report Only
  c. Fixed Disk C:\ protection: Disinfect/Query
  d. Network Drives: Report Only
  e. Dynamic Virus Protection (DVP): Disinfect/Query
   1. Scan first full 1 MB of memory
   2. Run in minimum amount of memory
   3. No schedule set for full scan
III. Default Protocols
 1. Microsoft IPX/SPX Compatible Protocol
  a. Set as the default protocol
  b. Auto-configures to 802.2 or 802.3
 2. Microsoft Netbeui
 3. Walker Richer & Quinn's LAT Protocol
  a. Static Host List
   1. CSPS01
   2. CSPS02
   3. CSPS03
   4. CSPS04
IV. Clients
 1. Microsoft Windows Client
  a. Not set to log into a domain
 2. Microsoft Netware Client
  a. Preferred server is the local server (disabled on NT clients)
V. Installed Printer Drivers
 1. Hewlett Packard LaserJet
 2. Any local printer drivers
VI. Network Interface Card
 1. Hewlett Packard Ethertwist
  a. Interrupt Request: 10
  b. Input/Output Base Address: 330
  c. Set to 16-bit real mode driver (to support WRQ LAT)
VII. Display Configuration
 1. Set to PC local display driver
 2. Set to 640 x 480 resolution
 3. Set to 75 Hz refresh rate
 4. Set to large icons
Comments on Evaluation As the systems were being evaluated, it was apparent that they are meant to be self-sufficient and almost completely independent of the server. Again, for fault tolerance reasons this is a good decision. It means that if one of the servers were to go down, the only effect on the workstation would be that file sharing and shared printing would be unavailable. These two factors would not prohibit employees from getting work done effectively; it would add some inconvenience, but the employees could still function. The choice of Windows 95 as the operating system was based on the fact that the computers being used were IBM compatible, which demands an IBM-compatible operating system. Also, the users of the PCs would mostly be using the computer for one or two applications that were not processor demanding. 
Also, Windows 95 is superior to Windows 3.11 in maintainability, security, and multitasking. An operating system such as Windows NT would be too powerful and too costly to implement, and OS/2 would likewise be too powerful and is not as compatible as Windows 95 with DOS-based applications. Therefore, Windows 95 seems to have been a good choice for this type of environment. It is also apparent that the systems have been configured to be managed and repaired remotely with the application Co-Session Remote. This application is configured to allow a workstation to be remotely controlled by a system administrator from a PC on the same network. It has been configured for use over IPX/SPX and Netbeui, which means that the connection is very fast: instead of using dial-up connections at 28.8 kbps, the system can be controlled at 10 Mbps, which is significantly faster. One weakness of this configuration is the necessity of loading drivers in real mode instead of the 32-bit mode of Windows 95. This is necessary because these systems must connect to a VAX using the LAT protocol, and the LAT protocol runs only in 16-bit real mode. This limitation does not significantly slow down the workstation, but it does make communication with the server slightly slower. As soon as the LAT protocol is upgraded to run in the faster 32-bit environment, the configuration of all Windows 95 based machines should be upgraded. Potential Improvements After performing an in-depth study of the systems being used by the school district, the issue that needs the most attention is data backup. Currently there is no routine procedure in place to safeguard the district's data. This should be a major concern, and steps should be taken to resolve the problem before a disaster occurs. Additionally, the use of real mode network drivers needs to be phased out as soon as possible. 
The users currently do not see degradation in performance, but as their applications become more network-intensive the problems will grow. Outside of the backup problem and the real mode drivers, all other critical areas have been sufficiently addressed to give the users a robust system that can be easily upgraded and managed. Conclusion The school district's personal computer network provides employees with a means to compile, process, and disseminate information relevant to business operations. Currently the primary type of information being processed is text-based, with some use of embedded graphical images. No other medium, such as video teleconferencing, is being utilized over the network. The district is currently in the process of providing employees with Internet access at their desktops, which is used for such activities as funds acquisition, consulting state bid lists, and personal e-mail. The support for these 500+ systems comes from only one support professional, who has to cover over 50 separate buildings. As a result, the district needs systems that are fast, reliable, inexpensive, and low-maintenance, and that have the ability to communicate with many other personal computers and servers. This evaluation found that the district is not yet at the level it needs to be, but steps in the right direction are being made to get there. The computers have good software configurations, and most users have all of the hardware they need to perform their job functions. If the district can acquire more personnel to support this network and come up with a routine backup plan, then the users of the network can continue to support the school district effectively with this well-designed technology. 
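The routine backup plan recommended above could start very simply: a nightly copy of each server's shared files into dated archive folders, with old copies pruned automatically. The sketch below is only an illustration of that idea; the server name, paths, and two-week retention policy are hypothetical, not part of the district's actual configuration.

```python
# Hypothetical sketch of a nightly backup routine with rotation.
# All names, paths, and the retention count are illustrative.
import shutil
import time
from pathlib import Path

RETAIN = 14  # assumed policy: keep two weeks of nightly copies


def nightly_backup(source: Path, archive_dir: Path) -> Path:
    """Copy a server's shared-file tree into a dated archive folder,
    then prune the oldest copies beyond the retention window."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y-%m-%d")
    dest = archive_dir / f"{source.name}-{stamp}"
    shutil.copytree(source, dest, dirs_exist_ok=True)

    # Dated folder names sort chronologically, so the oldest
    # copies come first; delete everything beyond the window.
    copies = sorted(archive_dir.glob(f"{source.name}-*"))
    for old in copies[:-RETAIN]:
        shutil.rmtree(old)
    return dest
```

Even a script this small, scheduled to run each night on every server, would close the gap the evaluation identifies, though a tape-based scheme with off-site storage would be the more robust long-term answer.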
f:\12000 essays\technology & computers (295)\Improving Cyberspace.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Improving Cyberspace by Jason Crandall Honors English III Research Paper 26 February 1996
Improving Cyberspace
Thesis: Though governments cannot physically regulate the Internet, cyberspace needs regulations to prevent illegal activity, the destruction of morals, and child access to pornography.
I. Introduction.
II. Illegal activity online costs America millions and hurts our economy.
 A. It is impossible for our government to physically regulate cyberspace.
  1. One government cannot regulate the Internet by itself.
  2. The basic design of the Internet prohibits censorship.
 B. It is possible for America to censor the Internet.
  1. All sites in America receive their address from the government.
  2. The government could destroy the address for inappropriate material.
  3. Existing federal laws regulate BBS's from inappropriate material.
III. Censoring the Internet would establish moral standards.
 A. Pornography online is harsher than in any other medium.
  1. The material out there is highly perverse and sickening.
  2. Some is not only illegal, but focuses on children.
 B. Many industries face problems from illegal activity online.
  1. Floods of copyrighted material are illegally published online.
  2. Innocent fans face problems for being good fans.
IV. Online pornography is easily and illegally accessible to minors.
 A. In Michigan, anyone can access anything in cyberspace for free.
  1. Mich-Net offers most of Michigan access with a local call.
  2. The new Communications Decency Act could terminate Mich-Net.
 B. BBS's offer callers access to adult material illegally.
  1. Most BBS operators don't require proof of age.
  2. Calls to BBS's are undetectable to a child's parents.
V. Conclusion.
Improving Cyberspace "People don't inadvertently tune into alt.sex.pedophile while driving to a Sunday picnic with Aunt Gwendolyn" (Huber). 
For some reason, many people believe this philosophy and therefore think the Internet and other online areas should not be subject to censorship. The truth, however, is that computerized networks like the Internet are in desperate need of regulations. People can say, do, or create anything they wish, and as America has proved in the past, this type of situation just doesn't work. Though governments cannot physically regulate the Internet, cyberspace needs regulations to prevent illegal activity, the destruction of morals, and child access to pornography. First, censoring the online community would ease the tension on the computer software industry. Since the creation of the first computer networks, people have been exchanging data back and forth, but eventually they stopped transferring text and started sending binaries, otherwise known as computer programs. Users liked the idea; why would someone buy two software packages when they could buy one and trade for a copy of another with a friend? This philosophy has cost the computer industry millions, and companies like Microsoft have simply given up. Laws exist against exchanging computer software; violators face up to a $200,000 fine and/or five years' imprisonment, but these laws are simply unenforced. Most businesses are violators as well. Software companies require that a separate license be purchased for every computer that uses one of their packages, yet companies rarely purchase their required minimum. All these illegal copies cost computer companies millions in profits, hurting the companies and eventually the American economy. On the other hand, many people believe that the government cannot censor the Internet. They argue that the Internet is an international network and that one government should not have the power to censor another nation's telecommunications. For example, American censors can block violence on American television, but they cannot touch Japanese television. 
The Internet is open to all nations, and one nation cannot appoint itself police of the Internet. Others argue that the design of the Internet prohibits censorship. Every page on the Internet is run by a different site, and usually the location of the site is undetectable. If censors cannot find a site, they cannot shut it down. Most critics believe that America cannot possibly censor the Internet. Indeed, however, the American government can censor the Internet. Currently, the National Science Foundation administers all Internet addresses, such as web addresses. The organization could employ censors who would check every American site monthly. Any site the censors found with illegal material could immediately lose its address, thus shutting down the site. Some might complain about cost, but if the government raised the annual price to hold an address from a modest $50 to, say, $500, it could easily afford to pay for the censors. This would not present a problem, because mostly businesses own addresses; it would not affect use by normal people. For example, microsoft.com is the address for Microsoft, but addresses like crandall.com just do not exist. Bulletin Board Systems (BBS's) are another computer medium in need of censorship. Like the Internet, some spots contain hard-core pornography, yet some have good content. Operators usually orient their BBS's toward the local community, but some operators open their systems to users across the world. Under federal law, the government can shut down a BBS if it transfers illegal material across a state border. As a postal worker in Tennessee showed, shutting down a BBS with illegal pornography is an easy process. When he called a BBS in California and found illegal child pornography, he called his local police. Two days later the police had closed the BBS, and Robert Thomas was awaiting prosecution in a Tennessee jail (Elmer-Dewitt). 
If the government were to employ censors like that postal worker, thousands of BBS's transmitting illegal material across state borders could be shut down immediately. Secondly, censoring cyberspace would help establish moral standards. According to a local survey, 83% of adults online have downloaded pornographic material from a BBS, and 47% of minors online have downloaded pornographic material from a local BBS (Crandall). In another worldwide survey, only 22% of 571 responders thought the Internet needed regulation to prevent minors from obtaining adult material (C|Net). Obviously, something is wrong with America's morals. A child cannot walk into a video store and walk out with X-rated movies. A minor cannot walk out of a bookstore with a copy of Playboy. Why, then, can children sit in the privacy of their homes and look at pornographic material while we do nothing about it? It is time America did something to establish moral standards. Certainly, people accepted the fact that pornography exists many years ago. They also set limits as to how far pornography could go, yet cyberspace somehow snuck past these limits. Just after the vote on the Exon bill, Senator Exon said, "I knew it was bad, but when I got out of there, it made Playboy and Hustler look like Sunday-school stuff" (Elmer-Dewitt). He was talking about the folder of images from the Internet he had received to show the Senate just before the vote. An hour later, the measure had passed 84 to 16. Demand drives the market, and it focuses on images people can't find in a magazine or video. Images of "pedophilia (nude photos of children), hebephilia (youths) and what experts call paraphilia -- a grab bag of 'deviant' material that includes images of bondage, sadomasochism, urination, defecation, and sex acts with a barnyard full of animals" (Elmer-Dewitt) flood cyberspace. 
Some wonder how much of this is available. A Carnegie Mellon study released last June showed that the Internet transmitted 917,410 sexually explicit pictures, films, or short stories over the 18 months of the study. Over 83% of all pictures posted on USENET, the public message center of the Internet, were pornographic (Elmer-Dewitt). What happened to our Information Superhighway? Is this what we are fighting to put into our schools? Furthermore, illegal material other than pornography is making its way online. When companies such as Paramount and FOX realized they were losing money because they were not online, they took action. They realized that people make money online just as they do on television. Several people make fan pages with sound and video clips of their favorite television programs. When companies heard of this, they wanted to do it themselves and sell advertising positions on their pages, as with television. Now these companies are pushing for court orders to shut down the fan pages due to copyright infringement (Heyman 78). If someone had censored these pages for copyrighted material in the first place, neither the company nor the owner of the page would be wasting time and money in these legal matters. Now the company can sue the owner of the page for copyright infringement, all because some Star Trek fan wanted to share some sound clips with other fans. Most important, online pornography is easily accessible to minors. What are parents to do? Usually it is the child in the family who is computer literate. If a child were accessing pornographic material by computer, odds are the parents would never know. Even if the parents are computer literate, children can find it, even without looking for it. When 10-year-old Anders Urmachen of New York City hangs out with other kids in America Online's Treehouse chat room, he has good clean fun. 
One day, however, he received an e-mail message with a file and instructions on how to download it, and he did. When he opened the file, 10 clips of couples engaged in heterosexual intercourse appeared on the screen. He called his mother, who said, "I was not aware this stuff was online; children should not be subject to these images" (Elmer-Dewitt). Poor Anders Urmachen didn't go looking for pornography; it snuck up on him, and as long as America allows that to happen, parents are going to have to accept the chance that their children may run into such material. In addition, for several years the people of Michigan have enjoyed access to the Internet through the state-funded program called Mich-Net. The program offers the public, along with schools throughout the state, free access to the Internet. The Mich-Net program has one flaw, however: it gives anonymity, allowing anyone, of any age, to access anything on the Internet. According to the new Communications Decency Act, which Clinton signed into law on February 8, 1996, the government could terminate the entire Mich-Net program because a minor can access pornography through it. This would be a huge loss to the state of Michigan and its schools. If we were to censor the Internet, minors wouldn't be able to access the material, and the program would have no problems. Furthermore, BBS's offer minors adult material at no cost. While some BBS's offer adult material only to adults, others make access very simple. Some simply say, "Type YES if you are over 18." This is simply inexcusable and unacceptable. Others require a photocopy of a driver's license showing the user is over 18, and some operators even require meeting their users. If all it takes to access adult material is hitting three keys, what is stopping children from doing so? Most young children do not have the ability to decide where they should and should not go. If it is available, they are going to want to see what it is. 
To extend the problem further, these BBS's are usually undetectable to a child's parents. Most BBS's are local phone calls and are free; the parents will never know if the child is accessing one. For example, the Muskegon area has about 15 BBS's running 24 hours daily. Of these 15, about five operators devote their BBS to adult material. Of these five, only one requires that the user meet the operator before receiving access, three simply ask for a photocopy of a driver's license, and the last has no security whatsoever: anyone can access anything. None of the five boards charges for access. This is simply unacceptable; we cannot let children access adult material in this manner. Every day thousands of children tune into sex in cyberspace. We do not subject our children to sex on television or in other media, and even where it appears, parents have ways to block it. Yet we have allowed computers to slip through parents' grip. Censoring the online community will also strengthen the computer industry and eventually our economy. The longer we wait, the more we hurt ourselves; let's regulate cyberspace before it is too late. Works Cited C|Net. Survey. Internet: 29 July 1995. Crandall, Jason. Survey. Muskegon, Michigan: 29 Jan. 1996. Elmer-Dewitt, Philip. "On a Screen Near You: Cyberporn." Time 3 July 1995: ProQuest. Heyman, Karen. "War on the Web." NetGuide Feb. 1996: 76-80. Huber, Peter. "Electronic Smut." Forbes 31 July 1995: 110. f:\12000 essays\technology & computers (295)\In the Name of Malace or for Business.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ IN THE NAME OF BUSINESS OR FOR MALICE A look into the computer virus by Michael Ross Engineering 201.02 January 22, 1997 Most of us swap disks with friends and browse the Net looking for downloads. Rarely do we consider that we are also exchanging files with anyone and everyone who has ever handled them in the past. 
If that sounds like a warning about social diseases, it might as well be. Computer viruses are every bit as insidious and destructive, and they come in a vast variety of strains. A computer virus can tear up your hard drive and bring down your network. However, computer viruses are almost always diagnosable and curable, and cures for new strains are usually just a matter of days, not months or years, away. A virus is a program that "infects" computer files (usually other executable programs) by inserting copies of itself into those files. This is usually done in such a manner that the copies will be executed when the file is loaded into memory, allowing them to infect still other files, and so on. Viruses often have damaging side effects, sometimes intentionally, sometimes not (Microsoft Encarta 1996). Most viruses are created out of curiosity. Viruses have always been viewed as well-written, creative products of software engineering. I admit there are many out there who create them out of malice, but far more people are just meeting a challenge in software design. The people who make anti-virus software have the most to gain from the creation of new viruses. This is not a slam, just an observation. A common type of virus is the Trojan Horse, a destructive program disguised as a game, a utility, or an application. When run, a Trojan Horse does something devious to the computer system while appearing to do something useful (Microsoft Encarta 1996). A worm is also a popular type of virus. A worm is a program that spreads itself across computers, usually by spawning copies of itself in each computer's memory. A worm might duplicate itself in one computer so often that it causes the computer to crash. Sometimes written in separate "segments," a worm is introduced secretly into a host system either for "fun" or with intent to damage or destroy information. The term "worm" comes from a science-fiction novel (Microsoft Encarta 1996). 
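The standard countermeasure to these strains is signature scanning: searching a file's bytes for sequences considered unique to each known virus. A minimal sketch of the idea follows; the strain names and signatures are made up for illustration and are not taken from any real scanner or virus.

```python
# Toy signature database: each known strain maps to a byte sequence
# considered unique to it. Real products store much longer sequences
# extracted from actual virus code, which is why their databases need
# regular updating.
SIGNATURES = {
    "Example.A": bytes.fromhex("deadbeefcafef00d"),
    "Example.B": bytes.fromhex("0badc0de12345678"),
}

def scan(data: bytes) -> list[str]:
    """Return the names of every known strain whose signature appears."""
    return sorted(name for name, sig in SIGNATURES.items() if sig in data)

clean = b"\x90" * 64                          # harmless padding, no signatures
infected = clean + SIGNATURES["Example.A"] + clean
print(scan(clean))     # []
print(scan(infected))  # ['Example.A']
```

Heuristic scanners work differently: instead of exact byte matches, they flag files exhibiting suspicious patterns, trading exactness for the ability to catch strains not yet in the database.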
Some viruses destroy programs on computers, although the better-written ones do not. Many virus authors incorporate code that destroys data only after the virus determines certain criteria have been met, such as a date or a certain number of replications. Many viruses do not do a good job of infecting other programs and end up corrupting the program they are trying to infect, or making it completely unusable. The purpose of a virus, in many cases, is to infect as many files as possible with little or no noticeable difference to the user. How does a virus scanner work? Most virus scanners use a very simple method: they search for a particular sequence of bytes that makes each virus unique, like a DNA sequence. When a new virus is discovered, a fairly long sequence of bytes from it is inserted into the anti-virus software's database. That's why you need to keep the database updated. Any virus scanner you buy should handle at least three tasks: virus detection, prevention, and removal. Some virus scanners use a method called heuristic scanning. They apply "rules of thumb" that can identify viruses that have not even been put in the virus database yet. What are the rules of thumb? They are basic assembly-language clues that make a file suspicious, such as a JMP instruction at the top of the file. No virus scanner is infallible, and anyone who tells you otherwise has no idea what they are talking about. The two best virus scanners, in my opinion, are F-PROT and THUNDERBYTE. Both use the heuristic method described above. In conclusion, viruses are, and always will be, a part of the computing world. They have been around since programming began and will continue to thrive as long as computers are used. Technology will force us to adapt and be aware that any information we place on a computer may not be safe. References Deadly New Computer Viruses Want To Kill Your PC usability. 
By James Daley, http://www.headlines.yahoo.com/news/stories, originally published in Computer Shopper, December 1996. Microsoft Encarta 96, reference material, Microsoft Corporation. f:\12000 essays\technology & computers (295)\INTEGRATION OF UMTS AND BISDN IS IT POSSIBLE OR DESIRABLE.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ INTEGRATION OF UMTS AND B-ISDN - IS IT POSSIBLE OR DESIRABLE? INTRODUCTION In the future, existing fixed networks will be complemented by mobile networks with similar numbers of users. These mobile users will have requirements and expectations identical to those of fixed users for on-demand telecommunications applications requiring high bit-rate channels. It will be necessary for these fixed and mobile networks to interoperate in order to pass data, in real time and at high speeds, between their users. But how far must this interoperation be taken? How much integration of the fixed and mobile network structures is needed? Here, a fixed network, B-ISDN, and a mobile network, UMTS, both under development at the same time, are examined to see how well and how closely they should work together in order to meet expected user needs. Work already taking place on this is discussed. BACKGROUND The Universal Mobile Telecommunication System (UMTS), the third generation of mobile networks, is presently being specified as part of the European RACE technology initiative. The aim of UMTS is to implement terminal mobility and personal mobility within its systems, providing a single world mobile standard. Outside Europe, UMTS is now known as International Mobile Telecommunications 2000 (IMT2000), which replaces its previous name of Future Public Land Mobile Telecommunication System (FPLMTS). 
[BUIT95] UMTS is envisaged as providing the infrastructure needed to support a wide range of multimedia digital services, or teleservices [CHEU94], requiring channel bit-rates of less than the UMTS upper ceiling of 2 Mbits/second, as allocated to it in the World Administrative Radio Conference (WARC) '92 bands. UMTS must also support the traditional mobile services presently offered by separate networks, including cordless, cellular, paging, wireless local loop, and satellite services. [BUIT95] Mobile teleservices requiring higher bit rates, from 2 to 155 Mbits/second, are expected to be catered for by Mobile Broadband Services (MBS), the eventual successor to UMTS, which is still under study. [RACED732] Broadband Integrated Services Digital Network (B-ISDN), conceived as an all-purpose digital network that will supersede Narrowband ISDN (N-ISDN or ISDN), is also still being specified. B-ISDN, with its transport layer of Asynchronous Transfer Mode (ATM) is expected to be the backbone of future fixed digital networks. [MINZ89] It is anticipated that, by the year 2005, up to 50% of all communication terminals will be mobile. [CHEU94] The Mobile Green Paper, issued by the European Commission in 1994, predicts 40 million mobile users in the European Union by 2000, rising to 80 million by 2010. This gives mobile users an importance ranking alongside fixed-network users. [BUIT95] One result of this growth in mobile telecommunications will be the increase in teleservice operations that originate in either the fixed or mobile network, but terminate in the other, crossing the boundary between the two. UMTS is expected to be introduced within the next ten years, and integration with narrowband and broadband ISDN is possible in this time. 
Interoperability between UMTS and ISDN in some fashion will be necessary to support the interoperability between the fixed and mobile networks that users have already come to expect with existing mobile networks, and to meet the expectation of consistency of fixed/mobile service provision laid out in the initial RACE vision. [SWAI94] One way of making UMTS attractive to potential customers is to offer the same range of services that B-ISDN will offer, within the bounds of the lower 2 Mbits/second ceiling of UMTS. [BUIT95] So, with the twin goals of meeting existing expectations and making UMTS as flexible as possible to attract customers, how closely integrated must UMTS be with B-ISDN to achieve this? ALTERNATIVES FOR INTEGRATING UMTS WITH OTHER NETWORKS The UMTS network could be developed along one of the following alternative integration paths: 1. Developing an 'optimised' network structure and signalling protocols tailored for the special mobile requirements of UMTS. This would be incompatible with anything else. Services from all fixed networks would be passed through via gateways. This design-from-scratch method would result in highly efficient intra-network operation, at the expense of highly inefficient inter-network operation, high development cost, scepticism relating to non-standard technology, and slow market take-up. True integration with fixed networks is not possible in this scenario. Given the drawbacks, this is not a realistic option, and it has not been considered in depth. One of the RACE goals was to design UMTS not as a separate overlay network, but to allow integration with a fixed network; this option is undesirable. [BUIT95] 2. Integration with and evolution from the existing Global System for Mobile communications (GSM). (GSM originally stood for Groupe Spécial Mobile during the early French-led specification, but is now taken to mean Global System for Mobile communications by the non-French-speaking world.) 
GSM is currently being introduced on the European market. This option has the advantage of using already-existing mobile infrastructure with a ready and captive market, but at the expense of limiting channel bit-rate considerably, which in turn limits the services that can be made available over UMTS. Some of the technical assumptions of UMTS, such as advanced security algorithms and distributed databases, would require new protocols to implement over GSM. GSM would thus limit the capabilities of UMTS. [BROE93a] 3. Integration with N-ISDN. Like the GSM option above, this initially limits UMTS's channel bit-rate for services, but it has a distinct advantage over integration with B-ISDN: N-ISDN is widely available right now. However, integrating UMTS and N-ISDN would require effective use of the intelligent network concept for the implementation of mobile functions, and modification of existing fixed network protocols to support mobile access. Integrating UMTS with N-ISDN makes possible widespread early introduction and interoperability of UMTS in areas that do not yet have B-ISDN available. This allows wider market penetration, as investment in new B-ISDN equipment is not required, and removes the dependency of UMTS on successful uptake of B-ISDN for interoperability with fixed networks. Eventual interoperability with B-ISDN, albeit with constrictions imposed on UMTS by the initial N-ISDN compatibility, is not prevented. [BROE93a] 4. Integration with B-ISDN. This scenario was the target of MONET (MObile NETwork), or RACE Project R2066. Unlike the above options, B-ISDN's high available bandwidth and feature set do not impose limitations on service provisioning in UMTS. Fewer restrictions are placed on the possible uses and marketability of UMTS as a result. Development of B-ISDN is taking place at the same time as UMTS, making smooth integration and adaptation of the standards to each other possible. 
For these reasons, integration of UMTS with B-ISDN has been accepted as the eventual goal for interoperability of future fixed and mobile networks using these standards, and this integration has been discussed in depth. [BROE93a, BROE93b, BUIT95, NORP94] At present, existing B-ISDN standards cannot support the mobile-specific functions required by a mobile system like UMTS. Enhancements supporting mobile functions, such as call handover between cells, are needed before B-ISDN can act as the core network of UMTS. Flexible support of fixed, multi-party calls, to allow B-ISDN to be used in conferencing and broadcasting applications, has many of the same requirements as support for mobile switching, so providing common solutions to allow both could minimise the number of mobile-specific extensions that B-ISDN needs. As an example of how B-ISDN can be adjusted to meet UMTS's needs, let us look at the mobile requirement for call handover support. Within RACE, a multiparty-capable enhancement of B-ISDN, upwardly compatible with Q.2931, has already been developed, and implementing UMTS with it has been studied. For example, a UMTS handover can be handled as a multi-party call, where the cell the mobile is moving to is added to the call as a new party, and the old cell is dropped as a party leaving the call, using ADD(_party) and DROP(_party) primitives. Other mobile functions can be handled by similar adaptations to the B-ISDN protocols. The enhancements to B-ISDN Release 2 and 3 that are required for UMTS support are minimal enough to form an integral part of future B-ISDN standards, without impacting existing B-ISDN work. [BUIT95] These modifications only concern high-level B-ISDN signalling protocols, and do not alter the transport mechanisms. The underlying ATM layers, including the ATM adaptation layer (AAL), are unaffected by this. 
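The handover-as-multiparty-call idea above can be sketched in a few lines. This is a rough illustration only; the class and function names are my own and are not taken from the Q.2931 multiparty extensions themselves.

```python
# Sketch: model a call as a set of parties manipulated by ADD and
# DROP primitives, then express handover as add-new-cell followed by
# drop-old-cell.

class Call:
    def __init__(self, parties):
        self.parties = set(parties)

    def add_party(self, party):
        """Corresponds to the ADD(_party) primitive."""
        self.parties.add(party)

    def drop_party(self, party):
        """Corresponds to the DROP(_party) primitive."""
        self.parties.discard(party)

def handover(call, old_cell, new_cell):
    # Add the target cell before dropping the old one, so the mobile
    # is never left without a serving cell during the transition.
    call.add_party(new_cell)
    call.drop_party(old_cell)

call = Call({"mobile-terminal", "cell-A"})
handover(call, "cell-A", "cell-B")
print(sorted(call.parties))  # ['cell-B', 'mobile-terminal']
```

The point of the sketch is that no new switching machinery is needed: handover reuses the same two primitives that fixed-network conferencing already requires.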
THE INTELLIGENT NETWORK The Intelligent Network (IN) is a means for service providers to create new services and rapidly introduce them on existing networks. As the IN was considered useful for implementing mobility procedures in UMTS, it was studied as part of MONET, and it is now specified in the Q.1200 series of the ITU-T recommendations. The intelligent network separates service control and service data from basic call control. Service control is then activated by 'trigger points' in the basic call. This means that services can be developed on computers independent of the network switches responsible for basic call and connection control. This gives flexibility to the network operators and service providers, as well as the potential to support the services on any network that supports the trigger points. Eventually, IN can be expanded to control the network itself, for example handling all UMTS mobile functions. [BROE93a] Any network supporting the intelligent network service set will easily be able to support new services using that service set, making integration of networks easier and transparent to the user of those services. The intelligent network is thus an important factor in the integration of B-ISDN and UMTS. UMTS, B-ISDN and the intelligent network service set are all being developed at the same time, allowing each to influence the others in producing a coherent, integrated whole. [BUIT95] CONCLUSION In order to be accepted by users as useful and to provide as wide a variety of services as possible, UMTS needs some form of interoperability or integration with a fixed network. Integration of UMTS with B-ISDN offers the most flexibility in providing services when compared to other network integration options, and constrains UMTS the least. 
With the increase in the number of services that will be made available in UMTS and B-ISDN over present standalone services, it is unrealistic to develop two separate and incompatible versions of each service for the fixed and mobile networks. Integrating UMTS and B-ISDN makes the same service set available to both sets of users in the same timescale, reducing development costs for the services, and promoting uptake and use in the market. The intelligent network concept allows the easy provision of additional services with little extra development cost. Integrating UMTS with B-ISDN, and with the intelligent network service set, is therefore desirable. Work on this integration indicates that the mobile requirements of UMTS can be met by extending existing B-ISDN signalling to handle them, without significantly modifying B-ISDN. Integration of UMTS with B-ISDN is therefore technically feasible. REFERENCES [BROE93a] W. van den Broek, A. N. Brydon, J. M. Cullen, S. Kukkonen, A. Lensink, P. C. Mason, A. Tuoriniemi, "RACE 2066: Functional models of UMTS and integration into future networks", IEE Electronics and Communication Engineering Journal, June 1993. [BROE93b] W. van den Broek and A. Lensink, "A UMTS architecture based on IN and B-ISDN developments", Proceedings of the Mobile and Personal Communications Conference, 13-15 December 1993, IEE Conference Publication 387. [BUIT95] E. Buitenwerf, G. Colombo, H. Mitts, P. Wright, "UMTS: Fixed network issues and design options", IEEE Personal Communications, February 1995. [CHEU94] J. C. S. Cheung, M. A. Beach and J. P. McGeehan, "Network planning for third-generation mobile radio systems", IEEE Communications Magazine, November 1994. [MINZ89] S. E. Minzer, "Broadband ISDN and Asynchronous Transfer Mode (ATM)", IEEE Communications Magazine, September 1989. [NORP94] T. Norp and A. J. M. Roovers, "UMTS integrated with B-ISDN", IEEE Communications Magazine, November 1994. [RACED732] IBC Common Functional Specification, Issue D. 
Race D732: Service Aspects. [SWAI94] R. S. Swain, "UMTS - a 21st century system: a RACE mobile project line assembly vision". END. f:\12000 essays\technology & computers (295)\Internet addiction.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ INVESTIGATIVE REPORT OF INTERNET ADDICTION Prepared for Dr. Jere Mitchum By Marwan, November 4, 1996 TABLE OF CONTENTS LIST OF ILLUSTRATIONS ABSTRACT INTRODUCTION Purpose Growth Of The Internet THE ADDICTION What Causes It Symptoms HOW TO OVERCOME THE ADDICTION The Elements Of Any Addiction CONCLUSION One Last Interesting Question 
REFERENCES LIST OF ILLUSTRATIONS Figures: 1. The number of networks connected to the Internet vs. time. 2. The percentage share of Internet domains. 3. Will the equation people = Internet users be true in 2001? ABSTRACT Investigative Report of Internet Addiction The problem of Internet addiction is not very noticeable now, and that is why not many people are taking it seriously; what these people are failing to see is the connection between the very rapid growth of the Internet and the addiction problem. It is simple logic: the bigger the Internet gets, the more users there will be, which will lead to a bigger number of addicts whose lives, as well as others', can be corrupted by this behavior. The main objective of this paper is to make sure that all readers know and understand what Internet addiction is and how it can be solved or avoided. I cannot offer a professional psychiatric solution, but I believe that the more a person knows about the addiction, the better chance they have to help themselves as well as others; that is why I have included a short summary of the elements of addiction. I hope that by the time you finish my paper you will have a better understanding of this issue and will keep yourself, as well as others, from taking Internet addiction lightly. INTRODUCTION Purpose The purpose of this paper is to make you, the reader, alert and more aware of the newest type of addiction, Internet addiction. Many people would call it exaggeration to classify spending a lot of time on the Internet as an addiction, but since the subject is fairly new, not everybody is taking it as seriously as they should. 
Growth of the Internet I am sure that everybody knows what the Internet is and has used it at least a couple of times, so there is no need for me to tell you what the Internet is. However, the incredible growth of the size and technology of the Internet is a fact well worth mentioning. Ever since the Internet was commercially introduced to the public late in 1989, the number of networks that form the Internet has been increasing exponentially. As you can see in figure 1, in the United States a new network is connected to the Internet every 30 minutes. Figure 1: Number of networks connected (Source: ftp://nic.merit.edu/statistics/nsfnet) Not all these networks are commercial; some are educational, some are for organizations, and some are simply networks that provide Internet services. All these different kinds of networks can be identified on the Internet by their domain extension, or in other words the last three letters in the address; e.g., http://www.arabia.com is a commercial site because of the .com. In figure 2 the percentage of all four major domains is shown, and it is obvious that the big share goes to the commercial domains. Figure 2 (Source of data: http://www.nw.com) THE ADDICTION With such vast growth of the Internet, what is considered a small problem now can grow along with the Internet to cause an even bigger one. 
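The domain-extension rule just described (identify a site by the last label of its address) can be sketched as follows; the mapping covers only the four major domains discussed above, and the helper name is my own.

```python
# Classify a site by its domain extension, as described in the text.
DOMAIN_TYPES = {
    "com": "commercial",
    "edu": "educational",
    "org": "organization",
    "net": "network/Internet services",
}

def classify(url: str) -> str:
    host = url.split("//")[-1].split("/")[0]   # strip the scheme and any path
    ext = host.rsplit(".", 1)[-1]              # last label of the hostname
    return DOMAIN_TYPES.get(ext, "unknown")

print(classify("http://www.arabia.com"))       # commercial
print(classify("ftp://nic.merit.edu/stats"))   # educational
```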
In a recent publication in the Los Angeles Times, Mathew McAlleseter reported on a survey conducted on the Internet by Victor Brenner, which produced the following results: "17% said that they spend more than 40 hours a week online, 31% said that their work performance had deteriorated since they started using the Internet, 7% got 'into hot water' with their employers or schools for Internet related activities" (LA Times, 5/5/1996, p. A-18). However, Brenner acknowledges that his survey is unscientific in many ways; respondents are self-selected, and many may be Internet researchers. On the other hand, Dr. Kimberly Young from the University of Pittsburgh-Bradford conducted a more rigorous survey that included 396 men and women. In her view, the heavy on-line users in her study all met the psychiatric criteria for clinical dependence applied to alcoholics and drug addicts: they had lost control over their Net usage and couldn't end it despite harmful effects on their personal and professional lives. What Causes It Finding a reason for Internet addiction can be as hard as finding a reason for smoking addiction; however, a couple of reasons are obvious for some addicts: * The power of instant access to all sorts of information and all kinds of people is a positive that can be overused. * A different kind of community that can draw people who tend to be shy in the real world, because this new virtual community does not require the social skills that real life does; all you have to do is be good on the keyboard. * Adopting new personas and playing your favorite kind of personality is not hard when others cannot see or hear you. * Last but not least is the fascination with technology. This might be the best excuse -if there is such a thing- to be addicted to the Internet, the information superhighway, or cyberspace. 
Symptoms When I was trying to collect more information about the symptoms of Internet addiction, I was surprised to find that almost half of the sites I visited treated Internet addiction as a joke. So, as part of the research, I decided to give you the top ten signs that you may be addicted to the Internet: 10. You wake up at 3 a.m. to go to the bathroom and stop to check your e-mail on the way back to bed. 9. You get a tattoo that reads "This body best viewed with Netscape Navigator 2.0 or higher." 8. You write down your URL when asked for your home address. 7. You turn off your modem and get this awful empty feeling, like you just pulled the plug on a loved one. 6. You spend half of the plane trip with your laptop on your lap...and your child in the overhead compartment. 5. Your home page sees more action than you do. 4. You start to notice how much this list describes you. 3. People ask why you turn your head to the side when you smile, i.e. :-) . 2. The last girl you picked up was a JPEG image. 1. Your modem burns up. You haven't logged in for two hours. You start to twitch. You pick up the phone and manually dial your service provider's access number. You try to hum to communicate with the network. You succeed! On the more serious side, an Internet-based support group for people who suffer from Internet addiction, called the Internet Addiction Support Group (IASG), has defined Internet Addiction Disorder (IAD) as the following: A maladaptive pattern of Internet use, leading to clinically significant impairment or distress as manifested by three (or more) of the following, occurring at any time in the same 12-month period: (I) tolerance, as defined by either of the following: (A) a need for markedly increased amounts of time on Internet to achieve satisfaction; (B) markedly diminished effect with continued use of the same amount of time on Internet. 
(II) withdrawal, as manifested by either of the following: (A) the characteristic withdrawal syndrome: (1) cessation of (or reduction in) Internet use that has been heavy and prolonged; (2) two (or more) of the following, developing within several days to a month after Criterion 1: (a) psychomotor agitation, (b) anxiety, (c) obsessive thinking about what is happening on Internet, (d) fantasies or dreams about Internet, (e) voluntary or involuntary typing movements of the fingers; (3) the symptoms in Criterion 2 cause distress or impairment in social, occupational or another important area of functioning; (B) use of Internet or a similar on-line service is engaged in to relieve or avoid withdrawal symptoms. (III) Internet is often accessed more often or for longer periods of time than was intended. (IV) There is a persistent desire or unsuccessful efforts to cut down or control Internet use. (V) A great deal of time is spent in activities related to Internet use (e.g., buying Internet books, trying out new WWW browsers, researching Internet vendors, organizing files of downloaded materials). (VI) Important social, occupational, or recreational activities are given up or reduced because of Internet use. (VII) Internet use is continued despite knowledge of having a persistent or recurrent physical, social, occupational, or psychological problem that is likely to have been caused or exacerbated by Internet use (sleep deprivation, marital difficulties, lateness for early morning appointments, neglect of occupational duties, or feelings of abandonment in significant others). (Source: John Suler, Ph.D., Rider University, May 1996, http://www1.rider.edu/~suler/psycyber/SUPPORTGP.HTML) How To Overcome The Addiction Now that the problem has been established and given a fancy abbreviation (IAD), the next question is what to do about it. Several groups of people created support groups dedicated to helping people who suffer from IAD. 
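Stripped to its logic, the IAD definition above is a threshold rule: the pattern qualifies when three or more of the seven numbered criteria are met within the same 12-month period. A minimal sketch of that rule (the short criterion labels are my own paraphrase, not IASG wording):

```python
# The seven IAD criteria, paraphrased as short labels.
CRITERIA = {
    "tolerance",
    "withdrawal",
    "used longer than intended",
    "unsuccessful efforts to cut down",
    "much time on related activities",
    "important activities given up",
    "continued use despite problems",
}

def meets_iad_definition(met: set) -> bool:
    """Three (or more) recognized criteria in the same 12-month period."""
    return len(met & CRITERIA) >= 3

print(meets_iad_definition({"tolerance", "withdrawal"}))  # False
print(meets_iad_definition({"tolerance", "withdrawal",
                            "used longer than intended"}))  # True
```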
Some of the best-known support groups are the IASG, which can be reached by e-mail at listserv@netcom.com, and the Webaholics support group, which can be reached at http://www.webaholics.com. However, the main key to getting rid of, or even avoiding, any type of addiction is to understand the basic elements of the addiction. Once you understand these elements, you will have a better chance of overcoming the addiction, or even of not acquiring it at all. The elements of addiction are: (I) Denial All people who are addicted (to anything) have some degree of denial. Without denial, most addictions would not have become established in the first place. Denial can take many forms. At the milder extreme, a person may believe "I can handle this problem whenever I decide to do so." The fact that one has a problem is at least acknowledged. At the other extreme, denial often takes the form of: "What problem? I don't have a problem. You've got the problem, Dude. And besides, you're beginning to tick me off!" (II) Failing to Ask for Help The second trademark of most addictions is that the people affected are very reluctant to ask for help. The mindset of most addicts is: "I can beat this myself." Not only are they reluctant to ask other people for help, but even when they do, they don't accept the advice of others easily. The best thing to do is to look for individuals or professionals who know how to cure addicted people. While these resource people are rare, you should keep looking for them. If you hook up with someone who claims to have this ability, look at your results, and don't stay with this person too long if you don't see yourself making progress. Keep looking for the right experienced helper and you will eventually find one who works well with you. (III) Lack of Pleasure in Other Activities One thing that is true about most addictions is that they are often either the only or the strongest source of pleasure and satisfaction in a person's life. 
People who become addicted often do so because their lives are not fulfilling. They can't seem to find passion, enjoyment, adventure, or pleasure in life itself, so they have to get these pleasures in other ways. This becomes important when you try to end your addiction. If you try to eliminate your main source of pleasure in life without being able to replace it immediately with other sources of pleasure, it is doubtful you will be able to stay away from your addictive behaviour for very long.

(IV) Underlying Deficiencies in Other Aspects of Life

Addiction should never be viewed as a problem in and of itself. Addictions are much better viewed as a symptom of other underlying problems and deficiencies. This is why most addiction therapies are so universally unsuccessful. To cure most addictions, you must look beyond the addiction itself and deal with the underlying deficiencies in coping and life management skills that have given rise to it. For example, people who become addicted to alcohol and other drugs usually have serious deficiencies in their life management, stress management, and interpersonal skills. Early in life, they experience a great deal of pain and personal suffering that they can't figure out how to deal with effectively. This drives them to seek external relief and comfort in the form of alcohol or other substances. As this pattern of behaviour gets repeated over time, their bodies become physically addicted to the chemical substance, and the addiction then becomes even more difficult to end. The same is true for cigarette addiction. Many people find that smoking helps them cope with stress or keep their weight under control. Even if they are successful at beating the physical part of cigarette addiction, they often quickly return to smoking because they fail to improve their repertoire of coping skills.
So if you are trying to deal with the problem of Internet Addiction, or any addiction for that matter, you should ask yourself the following questions:
1. What stress management or life management skills do I lack that led me to become addicted?
2. What problems in life do I have that my addiction helps me to avoid or to "solve"?
3. What would I need to learn how to do in order to let go of my addictive behaviour?
4. What "benefits" or payoffs am I getting from my addictive behaviour?

(V) Giving in to Temptation

Once you decide to eliminate an established addiction, there are certain requirements and pitfalls you must be prepared for. One of these is dealing with temptation. Whenever you try to stay away from something that previously gave you great pleasure, you're going to be tempted to return to that behaviour. Sometimes the temptation may be very strong. But even if it is, you must be prepared to resist it. Temptation, in truth, is nothing more than a powerful internal feeling state, i.e., a desire. It is often accompanied by thoughts as well, designed to make you "cave in" and satisfy your intense internal cravings. You, however, are always much stronger than any of your internal thoughts, feelings, or other internal states. You have the power to consistently ignore, or to choose not to respond to, your thoughts and demanding feelings. Thoughts and feelings have very little power at all (even though many people mistakenly "feel" that their thoughts and feelings are much more powerful than they are). Once you take on the challenge of dealing with any addiction, you will need to marshal your ability to successfully deal with temptation. If you don't have a sense that you have this power to succeed, you can use your addiction as an opportunity to discover that you really do have this important capability.
(VI) Failing to Keep Your Word

In order to change any established habit, be it an addiction or not, you must be able to give your word to yourself and KEEP YOUR WORD NO MATTER WHAT HAPPENS. All behaviour change involves deciding what actions are needed to break the established pattern and then taking those actions on a consistent basis over time. This is just another way of saying "you must give your word to yourself every day that you will do this or that, or not do this or that. Then you must keep your word, no matter what happens around you or what temptations or seductive excuses you encounter." Many addiction treatment programs fail because addicts are not empowered to rehabilitate their ability to give and keep their word. Many addicts, experience has shown, are very accomplished liars. Their promises and statements to others often can't be trusted. And their ability to keep promises to themselves is similarly impaired. Without the ability to give and keep your word, especially to yourself, you've got very little chance of curing any addiction. On the other hand, if you make this goal part of your overall game plan, you may be able to emerge from your addiction a stronger, healthier, and more trustworthy human being.

(VII) Failing to Do What May Be Necessary

Be very clear about this one important point: ALL ADDICTIONS CAN BE CURED AS LONG AS YOU AGREE TO DO WHATEVER MIGHT BE NECESSARY. One reason most addictions appear to be "incurable" is that people shy away from the types of actions that are often necessary. What types of actions are these? Well, they can be numerous, diverse, and highly specific to any individual. They might include any or all of the following (using Internet Addiction as an example): 1. Setting an absolute schedule or time limit for how much time you spend on the Internet. 2. Forcing yourself to stay away from the Internet for several days at a time. 3.
Placing self-imposed computer "blocks" on certain types of recreational programs, including the Web browser. 4. Setting an absolute policy for yourself of never signing on to the net at work (unless this is required for your study). 5. Establishing meaningful (but not harmful) consequences for yourself for failing to keep your word. 6. Applying these self-imposed consequences until you do regain your ability to keep your word consistently. 7. Forcing yourself to do other things instead of spending time on the net. 8. Resolving to learn how to derive other, more healthy sources of pleasure in life to replace or even exceed the pleasure you got from being on the Internet. 9. Asking for help whenever you feel you are not being successful. 10. Avoiding people or environments that might encourage you to return to your addictive behaviour; this might be impossible in college, but it is still a good point. These are not the only actions that can be taken, but many of them will work for a majority of individuals. The point is that in order to cure an addiction, you've got to be willing to do things that may seem drastic or outrageous, but not harmful, to yourself or others. So if you have a history of failing to make any type of desired behaviour change, all this may mean is that you weren't willing to do what was necessary. All addictions (and other dysfunctional behaviours) can ultimately be cured. It's just a matter of figuring out what specific actions will work (and will not cause you or others harm) and then executing those actions despite any thoughts or feelings you might have to the contrary.

(VIII) Failing to Anticipate and Deal With Relapses

No matter how much initial success you have in eliminating an addiction, unintended relapses are just around the corner. Something unexpected might happen in your life or you might otherwise succumb to a moment of weakness.
Good addiction treatment plans anticipate that such relapses commonly occur and prepare individuals to deal with them successfully. A relapse does not mean that you have failed in your efforts to cure yourself of an addiction. If you stay away from cigarettes for three months and then smoke again for two days in a row, you can view this as a "failure" if you want, or you can focus on the fact that of the last 92 days, you successfully abstained for almost 98% of them. That's pretty good. The trick is to keep 2 days from becoming 5 days, or 5 days from becoming 10 days, etc. Here you will need a game plan to keep an occasional relapse from triggering a full return to the addiction. Once you understand these elements, chances are you will not be an addict for long. And for those who came close, I now think that you are smart enough not to get sucked in.

CONCLUSION

Internet addiction is a serious addiction that should not be taken lightly. It might not be life threatening like some drug addictions, but it can be very harmful to a person's professional and personal life. The key to staying away from this addiction is to understand its elements and to have the strong will power to control oneself against all the temptations that the Internet might provide.

One Last Interesting Question

We all know that more and more people are gaining access to the Internet one way or another, but not everybody has had the chance of looking at Figure 3!

Figure 3. Will the equation people = Internet users be true in 2001? (Source: ftp://nic.merit.edu/statistics/nsfnet)

REFERENCES
Elias, M. (7/7/1996). Net overuse called "true addiction". USA Today, p. 1-A.
McAllester, M. (5/5/1996). Study says some may be addicted to the Net; Bulldog Edition. Los Angeles Times, p. A-18.
Network Wizards. [Online] Available URL: http://www.nw.com/zone/
Rodgers, J. (1994). Treatments that work. Psychology Today, Vol. 27, p. 34.
Young, Kimberly. Centre of On-line Addiction (COLA). [Online] Available URL: http://www.pitt.edu/~ksy/
Merit Network Inc. [Online] Available URL: ftp://nic.merit.edu/statistics/nsfnet/
f:\12000 essays\technology & computers (295)\internet beyond human control.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The Internet Beyond Human Control

The Internet has started to change the way of the world during this decade. More homes, companies, and schools have been getting hooked online to the Internet during the past few years. This change has started to become the new way of life, present and future. The Internet system is so advanced it is ahead of our time. This system is becoming predominantly used every day, but every which way it works out, this system ends up being used in a negative way. The Internet system has started to migrate into many schools. The schools that are hooked online are mostly colleges. This is because the Internet is capable of flashing up pornographic pictures or comments at any time. Also, there are many different chat lines that contain a lot of profanity and violence. A majority of high school students are minors, which is why it is mostly colleges, not high schools, that are hooked up online to the Internet system. The government is trying to figure out ways to police the Internet so this will not happen. The problem with that is that it is a very hard task to do. It is almost guaranteed this will not happen for another five to ten years. Being hooked up online helps make high school easy to slide through. There is a student at Chichester Senior High School who has a home computer hooked online to the Internet system. So when he has a term paper due, all he does is download a term paper on the system with the same topic. He just puts his name on the paper, hands it in, and receives an A. In return, when he reaches college he will not know how to write a term paper. This will cause him to drop out. I know other students do the same thing he does.
Now students will come out of high school not well educated. The Internet system is set up in a way that lets us give and receive mail. This mail is called electronic mail, usually known as e-mail. This mail will be sent to where you want it the second you click Send with the mouse. The regular U.S. mail takes two days if you are sending mail from Philadelphia to Media. Now if you mail from coast to coast, that could take up to two weeks. When my parents went to Mexico for two weeks they tried to send me a postcard, but I didn't receive it until the day after they came back. This could very well turn into a problem. Soon no one will even want to use U.S. mail. A big part of the government's money comes from the sale of stamps. If no one uses U.S. mail, there will be nobody buying stamps. Then the government is not bringing in the money it needs. So for that, the government must raise taxes. The Internet system will start taking over a large number of jobs. Through the Internet anyone can buy items like CDs, tapes, or sheet music. Within a matter of five to ten years there will not be any music stores in business. They will all be online on the Internet. The problem is already happening. Out on the west coast a person can go grocery shopping on the Internet. All a person has to do is go to the net search and type "food stores." Then go to Pathmark or Acme, make your list, and order the food on Visa. After that is all done, go to the store you ordered from and pick up your groceries, already packed in bags. This will eventually drift across to the east coast. If this happens we are looking at the biggest percentage of unemployment ever. There are at least two hundred employees working in just one grocery store, and there are thousands of grocery stores located in the U.S. That means there will be no need for any grocery stores to stay in business. Where are all these employees going to go? When a person registers for the Internet system, they have to give their full name and address.
If anyone wants to look up a person, they can do so easily. All a person has to do is type "finger" followed by that person's e-mail address at the prompt. So nothing is confidential in the Internet system. If you start talking to someone on a chat line, he could know your address. If he were a serial killer or burglar, you would be an easy target. If the person is a hacker he can find your social security number and change your whole identity. The movie called "The Net" is a good example because that could actually happen. That movie is about a woman who worked for a computer company. She spent all of her free time on the Internet chat lines. She had hold of a disk that someone on that chat line wanted. So she went away on vacation, and when she returned her whole identity had been changed. She was a completely different person. The Internet system is so advanced that nobody knows how to deal with the negativity. The government should never have allowed this system to be released until there was some way to police it. Now it could cause all of these problems and there is no way to deal with them as they happen. The only way people will have jobs is if they know computers. Soon there will have to be blockers in the system to stop people from finding so much information about other people. If nothing happens soon, there will be nothing the government can do. If there is any way the Internet could be shut down until they find a way to solve these problems, the Internet will work to our advantage.

f:\12000 essays\technology & computers (295)\Internet Censorship.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

For centuries governments have tried to regulate materials deemed inappropriate or offensive.
The history of western censorship is said to have begun when Socrates was accused "firstly, of denying the gods recognized by the State and introducing new divinities, and secondly of corrupting the young." He was sentenced to death for these crimes. Many modern governments are attempting to control access to the Internet. They are passing regulations that restrict the freedom people once took for granted. The Internet is a worldwide network that should not be regulated or censored by any one country. It is a complex and limitless network which allows boundless possibilities and would be affected negatively by the regulations and censorship that some countries are intent on establishing. Laws that are meant for other types of communication will not necessarily apply to this medium. There are no physical locations where communications take place, making it difficult to determine where violations of the law should be prosecuted. There is anonymity on the Internet, so ages and identities are not known; this makes it hard to determine if illegal activities are taking place in regards to people under the legal age. As well, it is difficult to completely delete speech once it has been posted, meaning that distributing materials that are obscene or banned becomes easy. The American Library Association (ALA) has a definition that states censorship is "the change in the access status of material, made by a governing authority or its representatives. Such changes include: exclusion, restriction, removal, or age/grade level changes." This definition, however, has a flaw in that it only recognizes one form of censorship: governmental censorship. Cyberspace, a common name for the Net, has been defined by one author as being "made up of millions of people who communicate with one another through computers."
It is also "information stored on millions of computers worldwide, accessible to others through telephone lines and other communication channels" that "make[s] up what is known as cyberspace." The same author went on to say "[the] term itself is elusive, since it is not so much a physical entity as a description of an intangible." The complexity of the Internet is demonstrated through its many components. The most readily identifiable part is the World Wide Web (WWW). This consists of web pages that can be accessed through the use of a web browser. Web pages are created using a basic markup language (HTML). Another easily identified section of the Internet is e-mail. Once again, it is a relatively user-friendly communication device. Some other less publicized sections of the Internet include: Internet Relay Chat (IRC), which allows real-time chatting to occur among thousands of people; Gopher, which works similarly to the WWW but for a more academic purpose; and File Transfer Protocol (FTP), which allows the transfer of files from one computer to another. Another service that is not the Internet proper but is carried along with it in many instances is Usenet, or News. In Usenet there are many newsgroups which center their conversations on varied topics. For example, rec.music.beatles would focus the discussion on the Beatles. This would be done through posts, or articles, almost like letters sent into a large pot where everyone can read and reply. Many controversial newsgroups exist and they are created easily. It is possible to transfer obscene and pornographic material through these newsgroups. There is no accurate way to determine how many people are connected to the Internet because the number grows so rapidly every day. Figures become obsolete before they can be published. "[The Internet] started as a military strategy and, over thirty years later, has evolved into the massive networking of over 3 million computers worldwide."
One of the most prominent features of the young Internet was its freedom. It is "a rare example of a true, modern, functional anarchy...there are no official censors, no bosses, no board of directors, no stockholders". It is an open forum where the only thing holding anyone back is a conscience. The Internet has "no central authority", and this makes it difficult to censor. As a result of these features and more, the Internet offers the potential for a true democracy. The freedom of speech that was possible on the Internet could now be subjected to governmental approval. For example, China is attempting to restrict political expression, in the name of security and social stability. It requires users of the Internet and e-mail to register, so that it may monitor their activities. In the United Kingdom, state secrets and personal attacks are off limits on the Internet. Laws are strict and the government is extremely interested in regulating the Internet with regard to these issues especially. Laws intended for other types of communication will not necessarily apply to this medium. Through all the components of the Internet it becomes easy to transfer material that particular governments might find objectionable. However, all of these ways of communicating on the Internet make up a large and vast system. For inspectors to monitor every e-mail, web page, IRC channel, Gopher site, newsgroup, and FTP site would be nearly impossible. This attempt to censor the Internet would violate the freedom of speech rights that are included in democratic constitutions and international laws. It would be a violation of the First Amendment.
The Constitution of the United States of America declares that "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances." Therefore it would be unconstitutional for any sort of censorship to occur on the Internet and affiliated services. Despite being illegal, restrictions on Internet access and content are increasing world-wide under all forms of government. In France, a country where the press generally has a large amount of freedom, the Internet has recently been in the spotlight. "To enforce censorship of the Internet, free societies find that they become more repressive and closed societies find new ways to crush political expression and opposition." Vice-President Al Gore, while at an international conference in Brussels about the Internet, said in a keynote address that "[Cyberspace] is about protecting and enlarging freedom of expression for all our citizens...Ideas should not be checked at the border." Another person attending that conference was Ann Breeson of the American Civil Liberties Union, an organization dedicated to preserving many things, including free speech. She is quoted as saying, "Our big victory at Brussels was that we pressured them enough so that Al Gore in his keynote address made a big point of stressing the importance of free speech on the Internet." Many other organizations have fought against such laws and have succeeded. A good example of this is the fight that various groups put up against the recent Communications Decency Act (CDA) of the U.S. Senate. The Citizens Internet Empowerment Coalition on February 26, 1996 filed a historic lawsuit in Philadelphia against the U.S. Department of Justice and Attorney General Janet Reno to make certain that the First Amendment would not be compromised by the CDA.
The list of plaintiffs alone, including the American Booksellers Association, the Freedom to Read Foundation, Apple, Microsoft, America Online, the Society of Professional Journalists, the Commercial Internet eXchange Association, Wired, and HotWired, along with thousands of netizens (citizens of the Internet), shows the dedication that is felt by many different people and groups to the cause of free speech on the Internet. Just recently in France, a high court struck down a bill that promoted the censorship of the Internet. Other countries have attempted similar moves. The Internet cannot be regulated in the way of other mediums simply because it is not the same as anything else that we have. It is a totally new and unique form of communication and deserves to be given a chance to prove itself. The laws of one country do not apply in another, and this is applicable to the Internet because there are no borders. Although North America (mainly the U.S.A.) has the largest share of servers, the Internet is still a world-wide network. This means that domestic regulations cannot oversee the rules of foreign countries. It would be just as easy for an American teen to download (receive) pornographic material from England as it would be from down the street. One of the major problems is the lack of physical boundaries, making it difficult to determine where violations of the law should be prosecuted. There is no one place through which all information passes. That was one of the key points that was stressed during the original days of the Internet, then called ARPANET. It started out as a defense project that would allow communication in the event of an emergency such as a nuclear attack. Without a central authority, information would pass around until it got where it was going. It is something like a road system: it is not necessary to take any specific route, but rather any route that gets you there. In the same way, information on the Internet starts out and eventually gets to its destination.
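The road-system idea above can be sketched in a few lines of Python: a toy network (the node names are invented for illustration) where a message takes whatever route is available, and still gets through when one node is knocked out. This is only a sketch of the principle; real Internet routing protocols are far more elaborate.

```python
from collections import deque

# A toy network: each node lists its directly connected neighbours.
# Node names are hypothetical; real routing is far more complex.
network = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def find_route(net, src, dst):
    """Breadth-first search: take any available path; no central authority decides."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for hop in net.get(path[-1], []):
            if hop not in seen:
                seen.add(hop)
                queue.append(path + [hop])
    return None  # no route at all

print(find_route(network, "A", "E"))  # one possible route from A to E

# Knock out node "B" entirely: traffic from A still reaches E via C.
damaged = {n: [m for m in nbrs if m != "B"]
           for n, nbrs in network.items() if n != "B"}
print(find_route(damaged, "A", "E"))
```

The point of the sketch is the second call: removing a node does not partition the toy network, just as ARPANET was designed so that no single destroyed site would stop messages from getting through.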
The Internet is full of anonymity. Since text is the standard form of communication on the Internet, it becomes difficult to determine the identity and/or age of a specific person. Nothing is known for certain about a person accessing content. There are no signatures or photo-IDs on the Internet, therefore it is difficult to certify that illegal activities (regarding minors accessing restricted data) are taking place. Take for example a conversation on IRC. Two people could be talking to one another, but all that they see is text. It would be extremely difficult, if not impossible, to know for certain the gender and/or age of a correspondent just from communication like this. And if the conversationalist lies about any of the points mentioned above, it would be extremely difficult to know or prove otherwise. In this way governments could not restrict access to certain sites on the basis of age. A thirteen-year-old boy in British Columbia could decide that he wanted to download pornography from an adult site in the U.S. The site may have warnings and age restrictions, but they have no way of stopping him from receiving their material if he says he is 19 years old when prompted. The complexity of the way information is passed around the Internet means that once information has been posted, deleting this material becomes almost impossible. The millions of people that participate on the Internet every day have access to almost all of the data present. As well, it becomes easy to copy something that exists on the Internet with only a click of a button. The relative ease of copying data means that the second information is posted to the Internet it may be archived somewhere else.
There are in fact many sites on the Internet that are devoted to the archiving of information, including: Walnut Creek's cdrom.com, which archives an incredible amount of software; The Internet Archive (www.archive.org), which is working towards archiving as much of the WWW as possible; and The Washington University Data Archive, which is dedicated to archiving software, publications, and many other types of data. It becomes hard to censor material that might be duplicated or triplicated within a matter of minutes. The Internet is much too complex a network for censorship to effectively occur. It is a totally new and unique environment in which communications take place. Existing laws are not applicable to this medium. The lack of tangible boundaries causes confusion as to where violations of law take place. The Internet is made up of nameless interaction and anonymous communication. The complexity of the Internet makes it near impossible to delete data that has been publicized. No one country should be allowed to, or could, regulate or censor the Internet.

f:\12000 essays\technology & computers (295)\Internet in the Classroom.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Internet in the Classroom

The Internet is a network of millions of computers worldwide, connected together. It is an elaborate source of education, information, entertainment, and communication. Recently, President Bill Clinton expressed an idea to put the Internet into every classroom in America by the year 2000 [4]. Considering the magnitude of this undertaking, and the costs involved, it is not realistically possible to set this as a goal. The Internet allows the almost five million computers [1] and countless users of the system to collaborate easily and quickly, either in pairs or in groups. Users are able to access people and information, distribute information, and experiment with new technologies and services.
The Internet has become a major global infrastructure used for education, research, professional learning, public service, and business. The costs of setting up and maintaining Internet access are varied and changing. Let's take a look at some of the costs of setting up Internet service in a typical school. First comes the hardware. The hardware required is generally a standard Windows-based PC or Macintosh and a 14.4 Kbps or higher modem. This will cost about $1,000 apiece. If the average school has 50 classrooms, already the cost has risen to $50,000 per school, for only one connection per classroom. Next you need actual Internet service. For 24-hour connections, expect to pay $100 or more per month, per account. If a school plans to have more than a few individual Internet users, it will need to consider a network with a high-speed dedicated line connected to the Internet. This school network would probably be a small- or medium-sized network in a single building or within a very few geographically close buildings. Connecting an entire school may require more than one LAN (Local Area Network). Most high-speed Internet connections are provided through a dedicated leased line, which is a permanent connection between two points. This provides a high-quality Internet connection at all times. Most leased lines are provided by a telephone company, a cable television company, or a private network provider and cost $200 per month or more. The typical connection from a LAN or group of LANs to the Internet is a digital leased line with a Channel Service Unit/Data Service Unit (CSU/DSU), which costs between $600 and $1,000. When budgeting for a school's Internet connection there are a number of factors to consider that might not seem immediately obvious. Technical support and training will incur additional ongoing costs, even if those costs show up only as an individual's time spent.
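The arithmetic behind these estimates can be laid out as a short Python sketch, using only the rough 1996 figures quoted in this essay (the $1,000-per-classroom, 50-classroom, $200-per-month, and 81,000-school numbers all come from the text; everything else follows by multiplication):

```python
# Rough 1996 figures quoted in this essay.
cost_per_classroom = 1000        # PC or Mac plus 14.4 Kbps modem
classrooms_per_school = 50
leased_line_per_month = 200      # low end of the quoted leased-line range
public_schools = 81000           # approximate number of U.S. public schools

hardware_per_school = cost_per_classroom * classrooms_per_school
line_cost_per_year = leased_line_per_month * 12
national_hardware = hardware_per_school * public_schools

print(f"Hardware per school:    ${hardware_per_school:,}")    # $50,000
print(f"Leased line per year:   ${line_cost_per_year:,}")     # $2,400
print(f"National hardware bill: ${national_hardware:,}")      # $4,050,000,000
```

The last figure is the hardware cost only, before line charges, support, training, and upgrades are counted; applying the essay's rule of three support dollars per hardware dollar would roughly quadruple it.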
Equipment will need to be maintained and upgraded as time passes, and even when all teachers have received basic Internet training, they will most likely have questions as they explore and learn more on their own. A general rule for budget planning is this: for every dollar you spend on hardware and software, plan to spend three dollars to support the technology and those using it [2]. There are approximately 81,000 public schools in America. Within these schools, there are about 46.6 million children in kindergarten through 12th grade [3]. Considering an average of about 50 classrooms per school, at an average cost of $1,000 per classroom for one connection (an extremely low estimate), this gives President Clinton's idea a price tag of roughly $4 billion. This estimate does not even begin to take into account the costs of constant upgrades, full-time technicians, and structural changes required to install these systems. When you look into the actual facts of a problem, sometimes you see that certain ideas are not at all plausible. Putting Internet access into our nation's schools is an excellent idea, but do we really need it? Considering that all major and most minor colleges offer a wide range of Internet services, it is not necessary to have that same service in our public schools. Bill Clinton's idea of putting Internet service into every classroom in America by the year 2000 is not realistically possible. When you look into the facts, it is obvious that this plan has not been fully thought out, and will not be put into effect.

References
[1] Malkin, G., and A. Marine, "FYI on Questions and Answers: Answers to Commonly Asked 'New Internet User' Questions", FYI 4, RFC 1325, Xylogics, SRI, May 1992.
[2] Sellers, J., "Answers to Commonly Asked 'Primary and Secondary School Internet User' Questions", NASA NREN/Sterling Software.
[3] NATIONAL CENTER FOR EDUCATION STATISTICS E.D.
TABS, July 1995. [4] The Whit, Rowan College paper. f:\12000 essays\technology & computers (295)\Internet Inventions.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Several inventions have changed the way people communicate with each other. From the old-fashioned telegraph to today's modern electronic forms of communicating, people have been creating easier ways to correspond. Electronic communication, such as e-mail and other internet offerings, has created a cheap and incredibly fast communications system which is gaining steady popularity. E-mail is basically information, usually in letter form, addressed to a destination on the internet. The internet is an international web of interconnected networks--in essence, a network of networks; these consist of government, education, and business networks. Software on the networks between the source and destination "reads" the addresses on packets and forwards them toward their destinations. E-mail is a very fast and efficient way of sending information to any internet location. Once an e-mail is sent, it arrives at its destination almost instantly. This provides people with a way to communicate with people anywhere in the world quickly, without the costs of other forms of communication such as telephone calls or postage for letters. The savings to be gained from e-mail were enough of an inducement for many businesses to invest heavily in equipment and network connections in the early 1990s. The employees of a large corporation may send hundreds of thousands of pieces of e-mail over the Internet every month, thereby cutting back on postal and telephone costs. It is not uncommon to find internet providers charging from twenty to thirty dollars a month for unlimited access to internet features. Many online services such as America Online and Prodigy offer e-mail software and internet connections which work in an almost identical way; however, they cost more. 
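The hop-by-hop delivery described in this essay, in which software at each intermediate network reads a packet's address and forwards it onward, can be sketched as a toy simulation. The network names and the routing table below are invented for illustration; real internet routing is far more dynamic.

```python
# Toy store-and-forward routing: each network reads the packet's
# destination address and forwards it to the next hop.
# All network names and routes here are hypothetical.

ROUTES = {
    # at this network : next hop toward "university.edu"
    "home-isp":     "regional-net",
    "regional-net": "backbone",
    "backbone":     "university.edu",
}

def deliver(packet, start):
    """Forward a packet hop by hop until it reaches its destination."""
    hops = [start]
    here = start
    while here != packet["to"]:
        here = ROUTES[here]      # "read the address, forward it"
        hops.append(here)
    return hops

path = deliver({"to": "university.edu", "body": "Hello"}, "home-isp")
print(path)   # the route the e-mail packet took, hop by hop
```

Because each hop only needs to know the next step, no single network has to know the whole route, which is what lets e-mail cross many independently run networks.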
The World Wide Web (WWW) and USENET Newsgroups are among other internet offerings which have changed the way people communicate with each other. The WWW can be compared to an electronic bulletin board where information consisting of almost anything can be posted. One can create visual pages consisting of text and graphics which become viewable to anyone with WWW access. Anything from advertisements to information and services can be found on the WWW. File transfers between networks can also be accomplished on the WWW through Gopher and FTP (File Transfer Protocol) sites. Newsgroups are very similar, but run in a different way. Newsgroups basically create a forum where people can discuss a vast array of subjects. There are thousands of newsgroups available. Once a person finds a subject that interests them, they may post notes which are visible to anyone visiting that particular newsgroup, and others may respond to such notes. Again, this can be advertising, information, or, more commonly, gossip. Though the internet can be a convenient way of communicating, it can become problematic. Networks can shut down, resulting in lost e-mail and leaving WWW sites and newsgroups inaccessible for periods of time. Another problem is the addictive factor associated with most online services. One can become attached to an online service, thrilled at being able to meet people all over the world. Much spare time can be spent e-mailing and surfing the net, creating a lack of real human interaction for such an individual. Though this may not be a big concern for most people, it is considered healthier to be active rather than sitting in front of a computer for hours a day. Also, the need for variety can cause one to subscribe to many providers with varying costs, creating large monthly bills. Though the lack of human interaction may seem like a problem, technology is continuing to create new ways to more fully interact with people on the internet. 
New inventions such as the I-Phone and miniature video cameras are further changing the way we communicate with each other. Now, with the I-Phone, one can actually talk with people over the internet as on the telephone, without normal long distance calling charges. Also, with the new video cameras which can be connected to the computer, people can actually see who they are talking to, regardless of location. No longer are people confining themselves to a room typing information to one another; they are interacting more and more directly. Electronic communication is proving to be the way of the future. This affordable and efficient system of exchanging information is still gaining popularity, with people as well as businesses utilizing its many services. f:\12000 essays\technology & computers (295)\Internet regulation.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Government Intervention of the Internet During the past decade, our society has become increasingly dependent on the ability to move large amounts of information across large distances quickly. Computerization has influenced everyone's life. The natural evolution of computers and this need for ultra-fast communications has caused a global network of interconnected computers to develop. This global net allows a person to send E-mail across the world in mere fractions of a second, and enables even the common person to access information world-wide. With advances such as software that allows users with a sound card to use the Internet as a carrier for long distance voice calls and video conferencing, this network is key to the future of the knowledge society. At present, this net is the epitome of the first amendment: free speech. It is a place where people can speak their minds without being reprimanded for what they say, or how they choose to say it. 
The key to the world-wide success of the Internet is its protection of free speech, not only in America, but in other countries where free speech is not protected by a constitution. Found on the Internet is a huge collection of obscene graphics, Anarchists' cookbooks, and countless other things that offend some people. With over 30 million Internet users in the U.S. alone (only 3 million of whom surf the net from home), everything is bound to offend someone. The newest wave of laws floating through lawmaking bodies around the world threatens to stifle this area of spontaneity. Recently, Congress has been considering passing laws that would make it a crime punishable by jail to send "vulgar" language over the net, or to export encryption software. No matter how small, any attempt at government intervention in the Internet will stifle the greatest communication innovation of this century. The government wants to maintain control over this new form of communication, and it is trying to use the protection of children as a smoke screen to pass laws that will allow it to regulate and censor the Internet, while banning techniques that could eliminate the need for regulation. Censorship of the Internet threatens to destroy its freelance atmosphere, while widespread encryption could help prevent the need for government intervention. Jim Exon, a Democratic senator from Nebraska, wants to pass a decency bill regulating the Internet. If the bill passes, certain commercial servers that post pictures of unclad beings, like those run by Penthouse or Playboy, would of course be shut down immediately or risk prosecution. The same goes for any amateur web site that features nudity, sex talk, or rough language. Posting any dirty words in a Usenet discussion group, which occurs routinely, could make one liable for a $50,000 fine and six months in jail. 
Even worse, if a magazine that commonly runs some of those nasty words in its pages, The New Yorker for instance, decided to post its contents on-line, its leaders could be held responsible for a $100,000 fine and two years in jail. Why does it suddenly become illegal to post something that has been legal for years in print? Exon's bill apparently would also "criminalize private mail": "I can call my brother on the phone and say anything--but if I say it on the Internet, it's illegal" (Levy 53). Congress, in its pursuit of regulations, seems to have overlooked the fact that the majority of the adult material on the Internet comes from overseas. Although many U.S. government sources helped fund Arpanet, the predecessor to the Internet, they no longer control it. Many of the new Internet technologies, including the World Wide Web, have come from overseas. There is no clear boundary between information held in the U.S. and information stored in other countries. Data held in foreign computers is just as accessible as data in America; accessing it takes only the click of a mouse. Even if our government tried to regulate the Internet, we have no control over what is posted in other countries, and we have no practical way to stop it. The Internet's predecessor was originally designed to uphold communications after a nuclear attack by rerouting data to compensate for destroyed telephone lines and servers. Today's Internet still works on a similar design. The very nature of this design allows the Internet to overcome any kind of barrier put in its way. If a major line between two servers, say in two countries, is cut, then Internet traffic will find another way around the obstacle. This obstacle avoidance makes it virtually impossible to separate an entire nation from indecent information in other countries. Even if it were physically possible to isolate America's computers from the rest of the world, doing so would be devastating to our economy. 
Recently, a major university attempted to regulate what types of Internet access its students had, with results reminiscent of a 1960's protest. A research associate at Carnegie Mellon University, Martin Rimm, conducted a study of pornography on the school's computer networks. He put together quite a large picture collection (917,410 images), and he also tracked how often each image had been downloaded (a total of 6.4 million times). Pictures of similar content had recently been declared obscene by a local court, and the school feared it might be held responsible for the content of its network. The school administration quickly removed access to all these pictures, and to the newsgroups where most of this obscenity is suspected to come from. A total of 80 newsgroups were removed, causing a large disturbance among the student body, the American Civil Liberties Union, and the Electronic Frontier Foundation, all of whom felt this was unconstitutional. After only half a week, the college had backed down and restored the newsgroups. This is a tiny example of what may happen if the government tries to impose censorship (Elmer-Dewitt 102). Currently, there is software being released that promises to block children's access to known X-rated Internet newsgroups and sites. However, since most adults rely on their computer-literate children to set up these programs, the children will be able to find ways around them. This mimics real life, where these children would surely be able to get their hands on an adult magazine. Regardless of what types of software or safeguards are used to protect the children of the Information Age, there will be ways around them. This makes it necessary to educate children to deal with reality. Altered views of an electronic world translate easily into altered views of the real world. "When it comes to our children, censorship is a far less important issue than good parenting. 
We must teach our kids that the Internet is an extension and a reflection of the real world, and we have to show them how to enjoy the good things and avoid the bad things. This isn't the government's responsibility. It's ours" (Miller 76). Not all restrictions on electronic speech are bad. Most of the major on-line communication companies have restrictions on what their users can "say." They must respect their customers' privacy, however. Private E-mail content is off limits to them, but they may act swiftly upon anyone who spouts obscenities in a public forum. Self-regulation by users and servers is the key to avoiding government-imposed intervention. Many on-line sites such as Playboy and Penthouse have started to regulate themselves. Both post clear warnings that adult content lies ahead and list the countries where this is illegal. The film and video game industries subject themselves to ratings, and if Internet users want to avoid government-imposed regulations, then it is time they began to regulate themselves. It all boils down to protecting children from adult material while protecting the first amendment right to free speech between adults. Government attempts to regulate the Internet are not limited to obscenity and vulgar language; they also reach into other areas, such as data encryption. By nature, the Internet is an insecure method of transferring data. A single E-mail packet may pass through hundreds of computers from its source to its destination. At each computer, there is the chance that the data will be archived and someone may intercept it. Credit card numbers are a frequent target of hackers. Encryption is a means of encoding data so that only someone with the proper "key" can decode it. "Why do you need PGP (encryption)? It's personal. It's private. And it's no one's business but yours. You may be planning a political campaign, discussing your taxes, or having an illicit affair. 
Or you may be doing something that you feel shouldn't be illegal, but is. Whatever it is, you don't want your private electronic mail (E-mail) or confidential documents read by anyone else. There's nothing wrong with asserting your privacy. Privacy is as apple-pie as the Constitution. Perhaps you think your E-mail is legitimate enough that encryption is unwarranted. If you really are a law-abiding citizen with nothing to hide, then why not always send your paper mail on postcards? What if everyone believed that law-abiding citizens should use postcards for their mail? If some brave soul tried to assert his privacy by using an envelope for his mail, it would draw suspicion. Perhaps the authorities would open his mail to see what he's hiding. Fortunately, we don't live in that kind of world, because everyone protects most of their mail with envelopes. So no one draws suspicion by asserting their privacy with an envelope. There's safety in numbers. Analogously, it would be nice if everyone routinely used encryption for all their E-mail, innocent or not, so that no one drew suspicion by asserting their E-mail privacy with encryption. Think of it as a form of solidarity" (Zimmerman). Until the development of the Internet, the U.S. government controlled most new encryption techniques. With the development of faster home computers and a worldwide web, it no longer holds control over encryption. New algorithms have been discovered that are reportedly uncrackable even by the FBI and the NSA. This is a major concern to the government, which wants to maintain the ability to conduct wiretaps and other forms of electronic surveillance into the digital age. To stop the spread of data encryption software, the U.S. government has imposed very strict laws on its exportation. One very well known example of this is the PGP (Pretty Good Privacy) scandal. PGP was written by Phil Zimmerman, and is based on "public key" encryption. This system uses complex algorithms to produce two codes, one for encoding and one for decoding. 
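The two-code system just described can be made concrete with a textbook-sized RSA example. The tiny primes below are for illustration only; real public-key systems like PGP use keys hundreds of digits long.

```python
# Textbook RSA with toy numbers: one key encodes, the other decodes.
# These primes are far too small for real use; they only show the idea.

p, q = 61, 53                 # secret primes
n = p * q                     # 3233, the modulus, part of both keys
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (2753), kept secret

def encrypt(m):
    """Anyone can encrypt with the PUBLIC key (e, n)."""
    return pow(m, e, n)

def decrypt(c):
    """Only the key's owner can decrypt with the PRIVATE key (d, n)."""
    return pow(c, d, n)

msg = 65                      # a message, encoded as a number < n
assert decrypt(encrypt(msg)) == msg

# A digital signature runs the same math in reverse: transform with
# the PRIVATE key, and anyone can verify with the PUBLIC key.
signature = pow(msg, d, n)
assert pow(signature, e, n) == msg
```

The last two lines also preview the signature property discussed later in this essay: since only the owner holds d, a value that verifies under the public key must have come from that owner.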
To send an encoded message to someone, a copy of that person's "public" key is needed. The sender uses this public key to encrypt the data, and the recipient uses their "private" key to decode the message. As Zimmerman was finishing his program, he heard about a proposed Senate bill to ban cryptography. This prompted him to release his program for free, hoping that it would become so popular that its use could not be stopped. One of the original users of PGP posted it to an Internet site, where anyone from any country could download it, prompting a federal investigator to begin investigating Zimmerman for violating export restrictions on encryption software. As with any new technology, this program has allegedly been used for illegal purposes, and the FBI and NSA are believed to be unable to crack this code. When told about the illegal uses of his programs, Zimmerman replies: "If I had invented an automobile, and was told that criminals used it to rob banks, I would feel bad, too. But most people agree the benefits to society that come from automobiles -- taking the kids to school, grocery shopping and such -- outweigh their drawbacks" (Levy 56). The government has not been totally blind to the need for encryption. For nearly two decades, a government-sponsored algorithm, the Data Encryption Standard (DES), has been used primarily by banks. The government always maintained the ability to decipher this code with its powerful supercomputers. Now that new forms of encryption have been devised that the government can't decipher, it is proposing a new standard to replace DES. This new standard is called Clipper, and is built around a classified algorithm whose keys are held in government escrow. Instead of software, Clipper is a microchip that can be incorporated into just about anything (televisions, telephones, etc.). This algorithm uses a much longer key that is 16 million times more powerful than DES. It is estimated that today's fastest computers would take 400 billion years to break this code by trying every possible key (Lehrer 378). 
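The "16 million times more powerful" figure follows directly from key length: DES uses a 56-bit key, while Clipper's algorithm uses an 80-bit key, so the number of possible keys grows by a factor of 2^24.

```python
# Keyspace comparison: DES (56-bit key) vs. Clipper (80-bit key).
# Each extra bit doubles the number of keys a brute-force attacker
# must try, so 24 extra bits multiply the work by 2**24.

des_keys = 2 ** 56
clipper_keys = 2 ** 80
ratio = clipper_keys // des_keys
print(ratio)   # 2**24 = 16,777,216 -- the "16 million times" in the text
```

This is why estimates of brute-force cracking time jump so dramatically between the two standards even though the underlying attack is the same.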
"The catch: At the time of manufacture, each Clipper chip will be loaded with its own unique key, and the Government gets to keep a copy, placed in escrow. Not to worry, though; the Government promises that they will use these keys to read your traffic only when duly authorized by law. Of course, to make Clipper completely effective, the next logical step would be to outlaw other forms of cryptography (Zimmerman)." The most important benefits of encryption have been conveniently overlooked by the government. If everyone used encryption, there would be absolutely no way that an innocent bystander could happen upon something they chose not to see. Only the intended receiver of the data could decrypt it (using public key cryptography, not even the sender can decrypt it) and view its contents. Each coded message can also carry an encrypted signature verifying the sender's identity. The sender's secret key can be used to encrypt an enclosed signature message, thereby "signing" it. This creates a digital signature, which the recipient (or anyone else) can check by using the sender's public key to decrypt it. This proves that the sender was the true originator of the message, and that the message has not been subsequently altered by anyone else, because the sender alone possesses the secret key that made that signature. "Forgery of a signed message is infeasible, and the sender cannot later disavow his signature" (Zimmerman). Gone would be the hate mail that causes many problems, and gone would be the ability to forge a document with someone else's address. The government, if it did not have ulterior motives, should mandate encryption, not outlaw it. As the Internet continues to grow throughout the world, more governments may try to impose their views onto the rest of the world through regulations and censorship. It will be a sad day when the world must adjust its views to conform to those of the most prudish regulatory government. 
If too many regulations are enacted, then the Internet as a tool will become nearly useless, and the Internet as a mass communication device and a place for freedom of mind and thought will become nonexistent. The users, servers, and parents of the world must regulate themselves, so as not to force government regulations that may stifle the best communication instrument in history. If encryption catches on and becomes as widespread as Phil Zimmerman predicts it will, then there will no longer be a need for the government to meddle in the Internet, and the biggest problem will work itself out. The government should rethink its approach to the censorship and encryption issues, allowing the Internet to continue to grow and mature. Works Cited Elmer-Dewitt, Philip. "Censoring Cyberspace: Carnegie Mellon's Attempt to Ban Sex from its Campus Computer Network Sends a Chill Along the Info Highway." Time 21 Nov. 1994: 102-105. Lehrer, Dan. "The Secret Sharers: Clipper Chips and Cypherpunks." The Nation 10 Oct. 1994: 376-379. "Let the Internet Backlash Begin." Advertising Age 7 Nov. 1994: 24. Levy, Steven. "The Encryption Wars: Is Privacy Good or Bad?" Newsweek 24 Apr. 1995: 55-57. Miller, Michael. "Cybersex Shock." PC Magazine 10 Oct. 1995: 75-76. Wilson, David. "The Internet Goes Crackers." Education Digest May 1995: 33-36. Zimmerman, Phil. Pretty Good Privacy v2.62 [Online]. 1995. Available FTP: net-dist.mit.edu Directory: pub/pgp/dist File: Pgp262dc.zip f:\12000 essays\technology & computers (295)\Internet security.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Internet Security Many people today are familiar with the Internet and its use. A large number of its users, however, are not aware of the security problems they face when using the Internet. Most users feel they are anonymous when on-line, yet in actuality they are not. There are some very easy ways to protect the user from future problems. 
The Internet has brought many advantages to its users but has also created some major problems. Most people believe that they are anonymous when they are using the Internet. Because of this thinking, they are not careful with what they do and where they go when on the "net." Security is a major issue with the Internet because the general public now has access to it. When only the government and higher education had access, there was no worry about credit card numbers and other types of important data being taken. There are many advantages the Internet brings to its users, but there are also many problems with Internet security, especially when dealing with personal security, business security, and the government's involvement in protecting the users. The Internet is a new, barely regulated frontier, and there are many reasons to be concerned with security. The same features that make the Internet so appealing, such as interactivity, versatile communication, and customizability, also make it an ideal way for someone to keep a careful watch on the user without the user being aware of it (Lemmons 1). It may not seem like it, but it is completely possible to build a personal profile on someone just by tracking them in cyberspace. Every action a person takes while logged onto the Internet is recorded somewhere (Boyan, Codel, and Parekh 3). An individual's personal security is the major issue surrounding the Internet. If a person cannot be secure and have privacy on the Internet, the whole system will fail. According to the Center for Democracy and Technology (CDT), any website can find out which server a person used to get on the Internet and where that server is located, whether his computer is Windows- or DOS-based, and also the Internet browser that was used. This is the only information that can be taken legally. However, it can safely be assumed that in some cases much more data is actually taken (1). 
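The sort of profile described above can be sketched from the information a browser volunteers with every request. The host name, header values, and parsing rules below are all hypothetical examples, not an actual standard or a real site's code.

```python
# What a website can learn from a visitor without asking anything:
# the connecting host (which reveals the access provider) and the
# request headers the browser sends. All sample values are invented.

def profile_visitor(client_host, headers):
    """Build a small visitor profile from connection info and headers."""
    agent = headers.get("User-Agent", "")
    return {
        # e.g. "dialup-12.bigisp.net" -> provider domain "bigisp.net"
        "access_provider": client_host.split(".", 1)[-1],
        "os": "Windows" if "Windows" in agent else "unknown",
        "browser": agent.split("/")[0] if agent else "unknown",
        # the Referer header names the page the visitor came from
        "came_from": headers.get("Referer", "direct"),
    }

sample_headers = {
    "User-Agent": "Mozilla/2.0 (compatible; Windows 95)",
    "Referer": "http://www.example.edu/links.html",
}
prof = profile_visitor("dialup-12.bigisp.net", sample_headers)
print(prof)
```

Nothing in this sketch requires the visitor's cooperation, which is exactly the point the CDT passage above is making.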
These are just a few of the many ways for people to find out the identity of an individual and what they are doing on the Internet. One of the most common ways for webmasters to find out information about the user is passive recording of transactional information. This records the movements the user made on a website: where the user came from, how long he stayed, what files he looked at, and where he went when he left. This information is totally legal to obtain, and often the webmaster will use it to see what parts of his site attract the most attention. By doing this, he can improve his site for the people that return often (Boyan, Codel, and Parekh 2). There is a much more devious way that someone can gain access to information on a user's hard drive. In the past, the user did not need to be concerned about the browser he used; that changed when Netscape Navigator 2.0 was introduced. Netscape 2.0 takes advantage of a programming language called Java. Java uses the browser to activate programs to better enhance the website the user is viewing. It is possible for someone to write a program in Java that transfers data from the user's computer back to the website without the user ever being aware of anything being taken. Netscape has issued new releases that fix some, but not all, of the two dozen holes in the program (Methvin 3). Many people do not realize that they often give information to websites by doing something called direct disclosure. Direct disclosure is just that: the user gives the website information such as his e-mail address, real address, phone number, and any other information that is requested. Often, by giving up information, a user will receive special benefits for "registering," such as a better version of some software or being allowed into "members only" areas (Boyan, Codel, and Parekh 2). E-mail is like a postcard, not like a letter sealed in an envelope. 
Every carrier that touches that e-mail can read it if they choose. Not only can the carriers see the message in the e-mail, but it can also be electronically intercepted and read by hackers. This can all be done without the sender or the receiver ever knowing anything has happened (Pepper 1). E-mail is the most intriguing thing to hackers because it can be full of important data, from secret corporate information to credit card numbers (Rothfeder, "Special Reports" 2). The only way to secure e-mail is by encryption. This makes an envelope that the hacker cannot penetrate. The downside to using encryption on a huge network like the Internet is that both users must have compatible software (Rothfeder, "Special Reports" 2). Another way to protect a person's e-mail is to use an anonymous remailer. This gives the sender a "false" identity which only the remailer knows, and makes it very difficult to trace the origin of the e-mail (Boyan, Codel, and Parekh 4). Another, more controversial way of gathering data is the use of client-side persistent information, or "cookies" (Boyan, Codel, and Parekh 2). Cookies are merely some encoded data that the website sends to the browser when the user leaves the site. This data will be retrieved when the user returns at a later time. Although cookies are stored on the user's hard drive, they are actually pretty harmless and can save the user time when visiting a website (Heim 2). Personal security is an important issue that needs to be dealt with, but business security is also a major concern. "An Ernst and Young survey of 1271 companies found that more than half had experienced computer-related break-ins during the past two years; 17 respondents had losses over $1 million" ("November 1995 Feature"). 
In a survey conducted by Computer Security and the FBI, 53 percent of 428 respondents said they were victims of computer viruses; 42 percent also said that unauthorized use of their systems had occurred within the last 12 months (Rothfeder, "November 1996 Feature" 1). While electronic attacks are increasing more rapidly than any other kind, a large number of data break-ins come from the inside. Ray Jarvis, President of Jarvis International Intelligence, says, "In information crimes, it's not usually the janitor who's the culprit. It's more likely to be an angry manager who's already looking ahead to another job" (Rothfeder, "November 1996 Feature" 3). While electronic espionage is increasing, so is the ability to protect computer systems. "The American Society for Industrial Security estimates that high-tech crimes, including unreported incidents, may be costing U.S. corporations as much as $63 billion a year" (Rothfeder, "November 1996 Feature" 1). There are many ways for businesses to protect themselves. They can use a variety of techniques, such as firewalls and encryption. Firewalls are one of the most commonly used security devices. They are usually placed at the entrance to a network. The firewall keeps unauthorized users out while admitting authorized users only to the areas of the network to which they should have access. There are two major problems with firewalls. The first is that they need to be installed at every point the system comes in contact with other networks, such as the Internet (Rothfeder, "November 1996 Feature" 5). The second is that firewalls use passwords to keep intruders out; because of this, the firewall is only as good as the identification scheme used to log onto the network (Rothfeder, "November 1996 Feature" 2). Passwords, a major key to firewalls, are also the most basic of security measures. The user should avoid easily guessable passwords such as a child's name, birthdate, or initials. 
Instead, he should use cryptic phrases and combine small and capitalized letters, such as "THE crow flys AT midnight". Another easy way to avoid problems is to change the password or phrase at least once a month (Rothfeder, "November 1996 Feature" 5). In case an intruder does get through the first layer of security, a good backup is to have all the data on the system encrypted. Many browsers come with their own encryption schemes, but companies can buy their own stand-alone packages as well. Most encryption packages are based on a public-private key pair, with the private key needed to unlock and decipher a message. Encryption is the single best way to protect data from being read if stolen, and it is rather cost-effective (Rothfeder, "November 1996 Feature" 5). Businesses need protection, but they cannot do it alone. The Federal government will have to do its part if the Internet is going to give us all the returns possible. Businesses will not use the Internet if they do not have support from the government. In the United States there is no set of laws that protects a person's privacy when on the Internet. The closest thing to a standard of privacy is an assortment of laws beginning with the Constitution and continuing down to local laws. These laws, unfortunately, are not geared toward the Internet; they are there only to protect a person's informational privacy (Boyan, Codel, and Parekh 3). Now, because of the booming interest and activity on the Internet at both the personal and the business level, the government has started investigating the Internet and working on ways to protect its users. The Federal Bureau of Investigation (FBI), the Central Intelligence Agency (CIA), and the National Security Agency have all devoted small units to fighting computer security crimes. 
After Senate hearings, the Justice Department proposed that a full-time task force be set up to study the vulnerability of the nation's information infrastructure. This would create a rapid-response team for investigating computer crimes. It also proposed requiring all companies to report high-tech break-ins to the FBI (Rothfeder, "November 1996 Feature" 4). Security for the Internet is improving; it is just that the usage of the Internet is growing much faster. Security is a key issue for every user of the Internet and should be addressed before a person ever logs on to the "net". At a minimum, all users should have passwords to protect themselves, and businesses need to put up firewalls at all points of entry. These are low-cost security measures which should not be overlooked in a possible multi-billion dollar industry. Works Cited Boyan, Justin, Eddie Codel, and Sameer Parekh. Center for Democracy and Technology Web Page. http://www.13x.com/cgi-bin/cdt/snoop.pl accessed January 26, 1997: 1-4. Heim, Judy. "Here's How." PC World Online January 1997: 1-3. Lemmons, Phil. "Up Front." PC World Online February 1997: 1-2. Methvin, David W. "Safety on the Net." Windows Magazine Online (1996): 1-9. "November 1995 Feature." PC World Online November 1995: 1-3. Pepper, Jon. "Better Safe Than Sorry." PC World Online October 1996: 1-2. Rothfeder, Jeffrey. "February 1997 Special Report." PC World Online February 1997: 1-6. Rothfeder, Jeffrey. "November 1996 Features." PC World Online November 1996: 1-6. f:\12000 essays\technology & computers (295)\internet servers.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ anonimous CGS 1000 Assignment 1 02/25/97 Internet Servers Here is a list of four common and not so common Internet servers; the list contains the basic features pushed by the servers, the price of the service, and the basic software and hardware requirements to access the Internet. 
THE MICROSOFT NETWORK: Features: MSN has a very user-friendly environment; everything is done in an easy, customized format. Customers can send and receive e-mail and get access to chat rooms, specialized interest forums, and news groups. MSN also offers online shopping and MSNBC online news. Cost: To start, the Microsoft Network has a free one-month unlimited-access trial plan; after that period the first billing cycle begins. With MSN, customers have a few choices in how they pay for their access: MSN premier plan: billed as the easy way to get the best of the Internet, the premier plan provides exclusive MSN programming, plus five hours to explore everything MSN and the Internet have to offer, for $6.95 per month plus $2.50 for each additional hour. MSN premier annual plan: customers get twelve months of service for the price of ten; a single payment of $69.50 buys the annual membership. With this plan the customer still pays $2.50 for each additional hour after the first five hours each month. MSN premier unlimited plan: premier unlimited access gives everything MSN and the Internet have to offer for a flat rate of $19.95 per month, with no hourly charges. Customer support: Service is very friendly and helpful if you are able to hold for up to twenty minutes; otherwise you are not going to get in touch with them. Hardware and software requirements: 
-PC with 486DX or higher processor. 
-Windows 95. 
-CD-ROM drive. 
-14.4 kbps modem. 
-Mouse or compatible pointing device. 
-VGA or higher resolution graphics card. 
-8 MB of memory required, 16 MB recommended. 
-50 MB of additional hard disk space. 
-Sound card recommended. 
-Software is provided by MSN free of charge. 
Comments: Very user-friendly interface. 
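A quick calculation with the rates quoted above shows when the unlimited plan beats the premier plan; the function is only a sketch of the published rates:

```python
def msn_monthly_cost(hours, plan="premier"):
    """Monthly cost under the MSN plans described above:
    'premier'   -> $6.95 covering five hours, then $2.50 per extra hour
    'unlimited' -> flat $19.95 (the annual plan is premier at 10/12 price)"""
    if plan == "unlimited":
        return 19.95
    extra_hours = max(0, hours - 5)
    return round(6.95 + 2.50 * extra_hours, 2)

# Break-even point: 5 + (19.95 - 6.95) / 2.50 = 10.2 hours per month,
# so anyone online more than about ten hours should take the flat rate.
for h in (5, 10, 11):
    print(h, "hours:", msn_monthly_cost(h), "vs", msn_monthly_cost(h, "unlimited"))
```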
The new MSN is a little slower than the older version because it has more graphics to load. Customer support is reachable only after a substantial wait on hold, but once through, the people at MSN are very helpful. NETCOM: NETCOM is a nationwide company whose emphasis is on business and productivity-minded individuals. NETCOM subscribers get unlimited access to the Internet over a high-speed digital network, with over 330 access points nationwide. For the moment they have only one rate: unlimited access for $19.95 per month. NETCOM claims to have very user-friendly software; a beginner should need no more than ten minutes to have it up and browsing. Packaged browsers include Netscape Navigator, Microsoft Internet Explorer, and their own NETCOMPLETE browser. NETCOM also partners with various software companies, such as McAfee WebScan, Eudora Pro, EasyPhoto, and SurfWatch, to name just a few. The company also claims that it is easy to send and receive e-mail, and with its Internet kit customers can connect to news groups, chat rooms, and web sites. If a customer encounters any kind of NETCOM-related problem, customer support is available seven days a week, twenty-four hours a day, at no charge. Over the phone, customers deal with an automated system; to reach a person, it has to be done via e-mail. Main features: Personal Services Portfolio: offered to NETCOM customers as an extension of their base subscription rate of $19.95 per month. Some of these services are free of charge; others require a nominal service fee. The following are the available pieces of the Personal Services Portfolio. Personal pages: enables a customer to create a home page on the World Wide Web at no additional cost. A tutorial is also available to walk through each step of making the home page. 
Two ways to get news: Personal News Page Direct will e-mail the customer the top twenty headlines and summaries based on news profiles predefined by the customer; the full text of the stories can be found on the news page web site. The other way to retrieve news is through the News Web Site, with up-to-the-minute news feeds on the web; customers can browse the top ten stories or use the ClariNet newsfeeds, which are searchable by keyword and category. Personal Finances: a set of customized financial tools designed to provide the means to make intelligent investment decisions. Information is available on over 77,000 stocks, mutual funds, options, and industry groups; the customer can get a listing of the best and worst mutual funds and set up a personal portfolio with up to 150 entries. SurfWatch: allows the customer to block certain material on the Internet. This service is especially beneficial to parents who would like to keep their kids away from certain material on the Internet. It can block WWW, FTP, Gopher, IRC, and other sites likely to contain objectionable material. Hardware and software requirements: 
-PC with a 486DX or higher processor. 
-Microsoft Windows 3.1 or Windows 95 operating system. 
-CD-ROM drive. 
-Modem faster than 9600 baud. 
-Mouse or compatible pointing device. 
-VGA or higher resolution graphics card. 
-8 MB of memory required. 
-15 MB of additional hard disk space. 
-The software required is provided by NETCOM. 
Comments: Customer support did not take more than three minutes to pick up the phone, and they were very helpful. I am not familiar with their interface, but judging from the looks of it, it seems to be very easy to use. COMPUSERVE: To start with COMPUSERVE, you get thirty days and ten hours to explore for free. The free month includes e-mail, news, weather, stock quotes, Internet access, and hundreds of special-interest forums, from cats and dogs to entertainment. 
The price plan they are pushing is the standard plan: $9.95 per month with five hours included, and $2.95 for each additional hour. They also have the super value plan, which costs $24.95 per month with twenty hours included, again at $2.95 for each additional hour. Main features: Redesigned interface: COMPUSERVE features a redesigned interface called COMPUSERVE 3.0. The new, extensively tested user interface helps members find content and features more quickly and easily, and it can even be customized. Multitasking: COMPUSERVE claims that this feature will save its customers time and money. The customer does not have to wait for one task to finish before moving on to another; for example, the customer can chat while downloading a file. A to-do list lets the customer line up multiple tasks in a background session, making it more efficient than ever to retrieve files online. COMPUSERVE forums: COMPUSERVE forums are gathering places for people with similar interests, such as animals and fish, home computing, health, or business. The forum conference room is a more intimate chat environment than the conference center, and more interest-specific than the chat sites. COMPUSERVE no-modem e-mail: The COMPUSERVE communication card lets customers use any telephone to listen to e-mail via a text-to-voice synthesizer. The card also allows the customer to forward e-mail messages to a fax machine, receive voice mail, set up conference calls, use speed dial, and access information services such as news and travel. When used as a traditional calling card, the COMPUSERVE card allows savings of up to 58% compared to the rates of other calling cards. Hardware and software requirements: 
-486DX or higher processor. 
-300 baud modem with local access. 
-8 MB of RAM. 
-10 MB of additional disk space. 
-Mouse or equivalent. 
-VGA or higher graphics resolution. 
-Windows. 
-Free software is provided by COMPUSERVE. 
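Using the rates quoted above, a quick calculation shows where the two COMPUSERVE plans cross over; the hour counts in the examples are invented:

```python
def compuserve_cost(hours, plan):
    """Monthly cost for the two COMPUSERVE plans described above:
    'standard' -> $9.95 with 5 hours included, 'super' -> $24.95 with 20,
    both at $2.95 per additional hour."""
    base, included = {"standard": (9.95, 5), "super": (24.95, 20)}[plan]
    return round(base + 2.95 * max(0, hours - included), 2)

def cheaper_plan(hours):
    """Pick whichever plan costs less for a given monthly usage."""
    return min(("standard", "super"), key=lambda p: compuserve_cost(hours, p))

# Break-even: 9.95 + 2.95*(h - 5) = 24.95  =>  h is about 10 hours,
# so light users should take standard and heavier users super value.
print(cheaper_plan(8), cheaper_plan(12))
```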
THE LIGHTHOUSE CONNECTION: Features: News groups and chat channels are available, as well as free e-mail addresses, and TLC offers customers free disk space for their own web page at no additional cost. The local access number is a free local call for Orange County and most of Seminole County, and TLC's modems are all 28.8 kbps or faster. TLC's support is available Monday through Friday from 9:30 AM to 7:30 PM, and Saturday from 10:00 AM to 7:30 PM. Rates: A flat rate of $10.95 per month for unlimited access. Hardware and software requirements: 
-PC with a 486DX or higher processor. 
-VGA graphics card. 
-Mouse or other pointing device. 
-9600 bps modem or faster. 
-Windows. 
-8 MB of RAM. 
-10 MB of free hard disk space. 
-Free software is provided by TLC. 
Comments: I found TLC's customer support very helpful; they walked me through the installation process, which I found to be more complicated than that of any other server. Their system seemed faster than any other I have tried, but their interface was not user-friendly at all; TLC and its users have to give up comfort for cost. Service recommendation: Personally, I would go with TLC, because once I get used to it, it will be just like any other server, and I will get the same basic services for half the price. 

f:\12000 essays\technology & computers (295)\Internet.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 

Internet MEMORANDUM 

Mrs. -----, I understand that some students who have already graduated from college are having a bit of trouble getting their new businesses started. I know of a tool that will be extremely helpful and is already available to them: the Internet. Up until a few years ago, when students graduated they were basically thrown out into the real world with just their education and their wits. 
Most of the time this wasn't good enough, because after three or four years of college the prospective entrepreneur either had forgotten too much of what they were supposed to learn, or they just didn't have the finances. Then, by the time they saved sufficient money, they again had forgotten too much. I believe I have found the answer. On the Internet your students will be able to find literally thousands of links to help them with their future enterprises. In almost every city across North America, no matter where these students move, they can link up and find everything they need. They can find links like "Creative Ideas", a place where they can go and retrieve ideas, innovations, inventions, patents, and licensing. Once they come up with their own products, they can find free expert advice on how to market them. There are easily accessible links to experts, analysts, consultants, and business leaders to guide their way to starting up their own businesses, careers, and lives. These experts can push beginners in the right direction in every field of business, including every way to generate start-up revenue, from better management of personal finances to diving into the stock market. When beginners have sufficient funds to actually open their own company, they can't just expect the customers to come to them; they have to go out and attract them. This is where the Internet becomes most useful: advertising. On the Internet, in every major consumer area in the world, there are dozens of ways to advertise. The easiest and cheapest way is to join groups such as "Entrepreneur Weekly". These groups offer weekly newsletters, sent to major and minor businesses all over the world, announcing new companies on the market. A newsletter includes everything about your business, from what you make or sell and where to find you, to what you're worth. These groups also advertise to the general public. 
The major portion of this advertising is done over the Internet, but that is good, because the Internet is their target market. By now, hopefully, their business is doing well, sales are up, and money is flowing in. How do they keep track of all their funds without paying for an expensive accountant? Back to the Internet. They can find plenty of expert advice on where to reinvest their money, including how many staff to hire and how qualified they should be, what technical equipment to buy, and even what insurance to purchase. This is where a lot of companies get into trouble: during expansion. Too many entrepreneurs try to leap right into the highly competitive mid-size company world. On the Internet, experts give their secrets on how to let a company's natural growth force its way in. This way the company is more financially stable for the rough road ahead. The Internet isn't always going to give you the answers you are looking for, but it will always lead you in the right direction. That is why I hope you will accept my proposal and make today's students aware of this invaluable business tool. 

f:\12000 essays\technology & computers (295)\Internet1.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 

The Internet has an enormous impact on the American experience. First, it encourages the growth of businesses by providing new ways of advertising products to a large audience, and thus helps companies publicize their products. Second, it allows more Americans to find out what goes on in other countries by learning about other cultures and by exchanging their opinions and ideas with people worldwide. This may well promote a better global understanding. Finally, by allowing people to access vast amounts of information easily, it will change how they make decisions and ultimately also their lifestyle. The Internet is a high-speed worldwide computer network which evolved from the Arpanet. 
The Arpanet was created by the Pentagon in 1969 as a network for academic and defense researchers. In 1983, the National Science Foundation took over the management of the Internet. Now the Internet is growing faster than any other telecommunications system ever built. It is estimated that in three years the system will be used by over 100 million people (Cooke 61). Since the World Wide Web (WWW or W3) became popular through point-and-click programs that made it easier for non-technical people to use the Internet, over 21,000 businesses and corporations have become accessible through the Internet (Baig 81). These companies range from corporate giants like IBM, AT&T, Ford, and J.C. Penney to small law firms. "With the Internet, the whole globe is one marketplace and the Internet's information-rich WWW pages can help companies reach new customers," says Bill Washburn, former executive director of Commercial Internet Exchange (Baig 81). Through the Internet, new opportunities to save money are created for companies. One of the bigger savings is the cost of transmission. It is estimated that the administrative cost of trade between companies in the U.S. amounts to $250 billion a year (Liosa 160). Sending an ordinary one-page e-mail message from New York to California via the Internet costs about a penny and a half, vs. 32 cents for a letter and $2 for a fax (Liosa 158). Hale & Dorr, for example, a Boston-based law firm, uses the Internet to its advantage. If a client company requests a contract for a foreign distributor, it can send electronic mail over the Internet to a Hale & Dorr computer, where a draft document will be constructed from the text. A lawyer will then review the documents and ship them back over the Internet to the client, including a list of lawyers in the other country (Verity 81). The ability to process orders quickly has always been an important factor in the business world, especially for mail-order companies. 
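Using the per-message costs quoted above (about 1.5 cents for an e-mail, 32 cents for a letter, $2 for a fax), a quick calculation shows how the savings scale; the 500-messages-a-month volume is an invented example:

```python
# Per-message cost figures from the text (Liosa 158).
RATES = {"email": 0.015, "letter": 0.32, "fax": 2.00}

def annual_cost(messages_per_month, medium):
    """Yearly cost of sending a fixed monthly volume by one medium."""
    return round(12 * messages_per_month * RATES[medium], 2)

# A firm sending 500 one-page messages a month would pay per year:
for medium in ("email", "letter", "fax"):
    print(medium, annual_cost(500, medium))
```

At that volume, e-mail costs $90 a year against $1,920 for first-class mail and $12,000 for fax, which is the scale of savings the essay is pointing at.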
Traditional methods, however, tended to be fairly expensive. On average it has cost mail-order companies from $10 to $15 to process a telephone or mail order, says Rodney Joffe, president of American Computer Group Inc. Over the Internet, this cost falls to $4, and it is much faster this way, too (Verity 84). Advertising on the Internet is another way to promote products. Hyatt Hotels Corporation, for instance, advertises its hotels and resorts, and it even offers a discount for people who say they "saw it on the net" (Verity 81). Hundreds of computer software companies now have their own Internet sites on the World Wide Web, where customers can get immediate support directly from the experts or buy and register new software online. Even magazine publishers are joining the Internet to regularly publish special Internet versions of their magazines, which are read by millions of people worldwide. The Internet attracts so many companies because they can use it as a tool for communication, marketing, advertising, sales, and customer support. It is not only faster and more efficient than traditional methods, but it is also cheaper. The Internet doesn't just promote the growth of businesses; it also creates new ways for Americans to get in touch with the rest of the world. It lets people expand their horizons and learn about different countries and cultures by getting insight into other people's lives across the globe. One of the many ways in which this can be done is to use Internet Relay Chat (IRC). IRC is a multi-user chat system, where people worldwide can convene on "channels" (a virtual place, usually with a topic of conversation) to talk in groups or privately. When people talk on IRC, everything they type is instantly transmitted around the world to the other users who are connected at the time, who can then type their responses to each other's messages. Since starting in Finland, IRC has been used in over seventy-five countries spanning the globe. 
IRC is networked over much of North America, Europe, and Asia (Eddings 57). Topics of discussion on IRC are varied. Technical and political discussions are popular, especially when world events are in progress. Not all conversations need to have a topic, however. Some people simply talk about their daily lives and experiences, which they can share with thousands of other people. Most conversations are in English, but there are always channels in German, Japanese, and Finnish, and occasionally other languages. On average, there are between five and six thousand people from many countries and cultures online at once. In times when information from abroad is hard to acquire, it becomes clear how essential the Internet can be to global understanding. IRC gained international fame during the Persian Gulf War, when updates from around the world came across the wire and most people on IRC gathered on a single channel to hear these reports. Even during the coup attempt in Russia, people were providing live reports on the Internet about what was really going on (Eddings 48). These reports were widely circulated throughout the world over the Internet. One startling instance that shows the importance of international communication through the Internet is taking place in Croatia. Halfway around the world, Wam Kat regularly types articles on the political situation and daily life in Zagreb, Croatia, on his computer. Kat's articles are not published in Yugoslav papers or magazines because the Croatian government owns all the media and has already prosecuted a group of journalists for treason. Kat's articles exist in cyberspace only. He transfers them to a German bulletin board system via modem, from where they are spread to computers worldwide through the Internet. "Electronic mail is the only link between me and the outside world," says Kat (Cooke 60). Kat is not the only one who participates in this community without boundaries. 
During recent coup attempts and catastrophes around the world, like the earthquake in Japan, the Internet provided an instant, unfiltered link to the rest of the world. The Internet is changing the way people relate to one another. It is re-sorting society into "virtual communities," as one author calls it (Cooke 61). Now groups of people from a variety of cultures, religions, and countries can meet on the Internet, exchange ideas, and learn from each other, instead of being bound by geographical location. Although the Internet already has an enormous impact on Americans right now, it will influence us even more in the near future. In 1994, the Clinton administration proposed a National Information Infrastructure, which would link every business, home, school, and college (Cooke 64). The Clinton administration has made the building of an improved data highway the main component of a determined plan to strengthen the U.S. economy in the 21st century (Silverstein 8). This improved national computer network has been called the Information Superhighway; it is essentially an improved version of the Internet with a much greater capacity for transmitting data. "The world is on the eve of a new era. The Information Superhighway will be crucial in creating long-term economic growth and maintaining U.S. leadership in basic science, mathematics and engineering," says Vice President Al Gore, the Clinton administration's leading high-tech advocate (Silverstein 9). The Information Superhighway will make it possible to merge today's broadcasting, 500-channel cable TV, general video, telephone, and computer industries into one giant computer network, because it will have a much greater capacity than today's Internet. This is made possible by replacing ordinary telephone wires with fiber-optic cable, which is made up of hair-thin strands of glass and can transmit 250,000 times as much data as a conventional telephone wire (Silverstein 9). 
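Taking the 250,000x figure above at face value, a rough calculation shows what it would mean in practice. The 28.8 kbps modem baseline and the 650 MB CD-ROM-sized file are assumptions of this sketch; the article gives neither:

```python
# Assumed baseline: a 28.8 kbps modem line (the fastest common modem
# of the period).  The article claims fiber carries 250,000x as much.
modem_bps = 28_800
fiber_bps = modem_bps * 250_000  # 7.2 billion bits per second

# Time to move a 650 MB CD-ROM image over each link:
cd_bits = 650 * 1024 * 1024 * 8
modem_seconds = cd_bits / modem_bps
fiber_seconds = cd_bits / fiber_bps

print(round(modem_seconds / 3600, 1), "hours by modem")
print(round(fiber_seconds, 3), "seconds by fiber")
```

The same transfer drops from more than two days on a modem to under a second on fiber, which is the difference between "primarily moving words" and carrying real-time video.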
Through the Information Superhighway, our everyday living standards will be greatly improved. While the Internet primarily moves words and can broadcast images and sound only at a very slow rate, the Information Superhighway will easily allow us to transmit sound and images quickly, making real-time video conferencing and actual spoken conversations on the computer possible for people worldwide. New technology like this will introduce even more practical and convenient applications. "Virtual medicine," for example, could help save people's lives. If it is very difficult for a patient to get to a medical specialist, surgery could be performed over the Information Superhighway, through what is called tele-presence surgery. To be successful, it requires video, fine motor control, and tactile and physical feedback, all of which can be digitized and transmitted over the Information Superhighway. The doctor will wear virtual-reality goggles which contain small video screens that create a 3D image of the patient, while sensors in the doctor's gloves, which control robot-like hands on the other end, detect the position of the doctor's fingers (Eddings 156). Since this method of surgery is intended to work between two distant sites, it makes it possible for specialized doctors at major hospitals to operate at rural clinics. The so-called virtual library, which will be established once the Information Superhighway is inaugurated, will greatly enhance the amount of information that can be accessed through computers. Already, people can search the Internet for databases of newspaper clippings, lists of government offices, and Supreme Court rulings, and even get limited access to the Library of Congress through a system called MARVEL, which pulls together library catalogs from all over the world into one super catalog (Eddings 158). With the Information Superhighway, people will be able to retrieve even more massive amounts of information. 
In the future, instead of going to the library and checking out books, people will simply turn on their home computers, log into a library mainframe computer, and download large amounts of text as they wish. Especially for institutions like schools and colleges, the Information Superhighway will have great potential for the improvement of general education and the accessibility of important information. The Internet is having a major influence on America, and its successor in the near future, the Information Superhighway, will continue to do so for a long time as well. By creating new ways of publicizing products and helping businesses, the Internet has strengthened and reinforced the U.S. economy. It also promotes a better global understanding by allowing millions of Americans to communicate with other people on an international level, because it provides a constant flow of instant, unbiased information for everyone at any time, anywhere. The ability to obtain information quickly and easily will become essential in the future, now that America is entering the information age. The Information Superhighway, once built, promises a good start into the new era. 

Cooke, Kevin. "The Whole World Is Talking." Nation July 12, 1993: 60-65. 
Eddings, Joshua. How the Internet Works. California: Ziff-Davis Press, 1994. 
Liosa, Patty. "Boom Time on the New Frontier." Fortune Autumn 1993: 153-161. 
Silverstein, Ken. "Paving the Infoway." Scholastic Update September 2, 1994: 8-10. 
Verity, John. "The Internet." Business Week November 14, 1994: 80-88. 

f:\12000 essays\technology & computers (295)\Intranets.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 

Abstract 

These days Intranets are becoming more and more popular throughout the business world and other types of organizations. Many companies and organizations have already made this change, and many more are considering it. 
The advantages offered by Intranets compared to other types of networks are many, and they come at a reduced cost for the owner. Less maintenance, less programming, and more flexibility on the network platform make the change attractive. Unlike other types of networks, Intranets allow the different types of machines and operating systems already at hand to operate on the same network platform. This reduces the cost of implementation, because the machines and operating systems already at hand can still be used throughout the network without conflicting with one another. Quick access and easy programming are other considerations in favor of this type of network. Intranets have only just started to be implemented throughout the world, and already a big change is being noticed. Companies are keeping track of all of their important information on web sites, which are restricted to users who have the security code to access them. Thanks to Internet technology, companies and other types of organizations are able to keep all of their information organized and easily accessible with the click of a button. The Internet: how has it changed the world around us? Government, education, and business are all wrapping themselves around it. Is this because of all of the information on it, or its simplicity, or its quickness, where with a simple point and click the information appears on the screen? The first intention of the Web, as it is referred to, was not to create a sea of web servers and surfers. The Department of Defense created it for its own use, to keep contact with all of its locations throughout the world, making it easier to retrieve and send information when desired. As businesses, government, and education discover the advantages of the Internet and web technologies, they are starting to implement them for internal use. 
This is better known as an Intranet, which represents a new model for internal information management, distribution, and collaborative computing, and offers a simple but powerful implementation of client/server computing. Intranets are private Web-based networks, usually inside corporate firewalls, that connect employees and business partners to vital corporate information. Thousands of organizations are finding that Intranets can help empower their employees through more timely and less costly information flow. They let companies speed information and software to employees and business partners. Intranets provide users with capabilities like looking up information, sending and receiving e-mail, and searching directories. They make it easy to find any piece of information or any resource located on the network. Users can execute a single query that results in an organized list of all matching information across all servers throughout the enterprise and out onto the Internet. As recently as two years ago, Intranets didn't exist; now the market for internal web servers is rapidly increasing, and Intranet technology is beginning to be used all over the world. Intranets present information in the same way to every computer. In doing so, they deliver what computer and software makers have long promised but never actually delivered: computers, software, and databases pulled together into a single system that enables users to find information wherever it resides. Intranets are only logically "internal" to an organization; physically they can span the globe, as long as access is limited to a defined community of interest. Countless organizations are beginning to build Intranets, bringing Internet and Web technologies to bear on internal organizational problems traditionally addressed by proprietary database, groupware, and workflow solutions. Two-thirds of all large companies either have an internal web server installed or are considering installing one. 
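The "single query across all servers" idea described above can be sketched in a few lines. The server names and document titles below are invented for illustration; a real Intranet search would query HTTP servers rather than in-memory dictionaries:

```python
# Simulated Intranet: each "server" holds a list of document titles.
SERVERS = {
    "hr.corp":      ["Employee Benefits 1997", "Holiday Schedule"],
    "finance.corp": ["Q1 Financial Summary", "Benefits Budget"],
    "eng.corp":     ["Help Desk FAQ"],
}

def intranet_search(term):
    """Run one query against every server and return an organized
    (server, title) list of all matches, as the essay describes."""
    hits = []
    for server, titles in sorted(SERVERS.items()):
        hits += [(server, t) for t in titles if term.lower() in t.lower()]
    return hits

print(intranet_search("benefits"))
```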
The organizations that use Internet technologies on the corporate network generally move traditional paper-based information distribution online. Other types of information that might be put online include the following: 
· competitive sales information 
· human resources/employee benefits statements 
· technical support/help desk applications 
· financial information 
· company newsletters 
· project management 
These companies typically provide a corporate home page as a way for employees to find their way around the corporate Intranet site. This page may have links to internal financial information, marketing, manufacturing, human resources, and even non-business announcements. It may also have links to outside sites, such as client home pages or other sites of interest. Both the Internet and Intranets center around TCP/IP (Transmission Control Protocol/Internet Protocol) applications. These applications are used for the transport of information over both wide and local areas. Enterprise networks nowadays are a mixture of many protocols, the most popular being IPX, IP, SNA, and others. This is all beginning to change, with these protocols being replaced by one, typically IP. IP can handle both LAN and WAN traffic; it is supported by the majority of computing platforms, from Macintoshes to Windows NT to the largest mainframes; and, on top of it all, it is the protocol used by the Internet. Three application protocols are commonly considered under the TCP/IP umbrella: FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), and HTTP (Hypertext Transfer Protocol). HTTP is a newer Internet protocol designed expressly for the rapid distribution of hypertext documents. It uses minimal network bandwidth; in addition, its simplicity makes it easier to design and implement an HTTP server or client browser. Once a server is set up, almost everybody can create web pages. 
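As a minimal illustration of the HTTP exchange described above, the sketch below uses only Python's standard library to serve and then fetch a one-line hypertext page; the page content is invented:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Page(BaseHTTPRequestHandler):
    """A tiny HTTP server: every GET returns one hypertext document."""
    def do_GET(self):
        body = b"<html><body><h1>Corporate Home Page</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Page)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client browser" side: one request, one hypertext document back.
html = urlopen("http://127.0.0.1:%d/" % server.server_port).read()
server.shutdown()
print(html.decode())
```

The request/response round trip here is the whole of HTTP's simplicity argument: a stateless GET, a status line, a few headers, and the document.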
From top managers down, employees are all able to create web pages with the use of HTML, the World Wide Web's universal document format. Converting documents into HTML format is getting easier and easier with new programs that do everything for the user. This is considered another big advantage of using web technology, because fewer programmers are required for maintenance, therefore reducing a company's expenses. Intranets allow programmers to make one copy of any information and run it anywhere, even across both client and server platforms. But why is this internal web so popular? There are typically three main reasons. First, internal webs can contain text and non-text items, for example recorded speech, graphics, and even video clips. This allows users to listen to speeches, watch video clips, and look at graphics ranging from pictures to graphs. Second, web sites can contain all types of information, depending on the content, author, and effort put into them. Companies are able to make pages covering employee payroll, company sales, client contracts, and many other subjects without limitation. Finally, each Intranet web server can be cross-linked to others by means of hypertext links, whether they are located around the world or just down the street. It is this ability that gives the Intranet its power, and its attraction for many corporations. Intranets are easy to implement: unlike most other types of networks, Intranets don't require the replacement of all of the existing systems, databases, and applications. They embrace the existing infrastructure investments, including desktop computers, servers, mainframes, databases, applications, and networks. Other types of networks would not allow an organization to have different types of machines or operating systems on the same platform; for example, on a traditional LAN or WAN one would not be able to use Macintosh computers on the same network as an IBM PC. 
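The document-to-HTML conversion mentioned above can be approximated in a few lines; this is a toy converter of this sketch's own design, not one of the commercial tools the essay refers to:

```python
from html import escape

def text_to_html(title, text):
    """Convert plain paragraphs (separated by blank lines) into a
    simple HTML page, escaping characters like < that are special."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    body = "\n".join("<p>%s</p>" % escape(p) for p in paragraphs)
    return "<html><head><title>%s</title></head>\n<body>\n%s\n</body></html>" % (
        escape(title), body)

page = text_to_html("Company Newsletter",
                    "Sales are up this quarter.\n\nBenefits < costs? Not anymore.")
print(page)
```

Even this toy shows why "fewer programmers are required": the markup is mechanical, so any memo or newsletter can be published without hand-coding.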
These networks would also not allow users to run different operating systems on different machines; Intranets, on the other hand, let different types of machines and operating systems share the same platform. Security is also a big factor on Intranets. Protecting information on a private network is critical. Intranet security services provide ways for resources to be protected against unauthorized users, for communications to be encrypted and authenticated, and for the integrity of information to be verified. Corporations can issue and manage a security key infrastructure to give their employees the ability to conduct company business securely across the network. The full potential of Intranet technologies is far from being realized. Over the next few years, Intranets will be enhanced with new services that will make them a top priority for any organization. Many companies and organizations are already changing over to Intranets, and as Intranets grow more popular, many more will convert their LANs and WANs because of the benefits they offer. Money is a big factor when deciding to replace an existing network, but with Intranets this usually expensive change is drastically reduced, making them very attractive for companies to consider.
Bibliography
Cortese, Amy. "Here Comes the Intranet." Business Week 26 Feb. 1996: 3.
Carr, Jim. "Intranets deliver: Internet technology can offer cheap, multiplatform access to corporate data on private networks." InfoWorld 19 Feb. 1996: 20.
Strom, David. "Creating Private Intranets: Challenges and Prospects for IS." Internet address: http://www.strom.com/pubwork/intranetp.html, taken on Feb.
10, 1997: 1-8.
f:\12000 essays\technology & computers (295)\Introduction to Computers Question Sheet.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Tutorial Question Sheet for the IBM PS/2
Name/Date:___________________________________
Section One: Short Answer Questions
1. What is the maximum number of letters/characters that can be in a filename? ________________________________________________________________________________________________________________________________________________
2. What are the main ways of exiting a program? ________________________________________________________________________________________________________________________________________________
3. What is the USUAL capacity of a standard, high-density 3.5", formatted diskette? ________________________________________________________________________
4. Give three (3) examples of removable storage media. ________________________________________________________________________________________________________________________________________________
5. Give two (2) ways of adding peripherals to your computer. ________________________________________________________________________________________________________________________________________________
6. Give three (3) examples of output devices.
7. Give three (3) examples of input devices. ________________________________________________________________________
8. What is an OS? ________________________________________________________________________________________________________________________________________________
9. What does LAN stand for? ________________________________________________________________________
10. BONUS QUESTION: WHAT DOES "MODEM" STAND FOR? ________________________________________________________________________
Section Two: The Computer, Inside and Out: Short Answer
1. What is the "brain" of your computer?
________________________________________________________________________
2. Define RAM.
3. Define ROM.
4. Give two (2) major areas of the keyboard.
5. What purpose does the microprocessor serve? ________________________________________________________________________
6. Give two (2) examples of memory. ________________________________________________________________________
7. Hardware + Software = ___________________________________________________
8. What purpose do the cursor keys serve? ________________________________________________________________________
9. What purpose does the NUM LOCK key serve? ________________________________________________________________________
10. What is Hardware? ________________________________________________________________________
Section Three: True or False
Circle the correct letter.
1. T or F: Computers are perfect.
2. T or F: Computers can think for themselves.
3. T or F: Computers are going to take over all jobs previously done by humans.
4. T or F: Some computers become obsolete quickly.
5. T or F: Computers can process data quickly and efficiently.
6. T or F: Computers will eventually turn against humans because they are so smart.
7. T or F: Computers are used only in businesses for hard tasks.
8. T or F: Piracy of computer software is okay; everybody does it.
9. T or F: The microcomputer is the smallest of these: micro, mini, mainframe.
10. T or F: A computer is a high-speed machine which performs arithmetic, makes comparisons, and remembers what it has done.
11. T or F: A mainframe computer is a computer which keeps track of all physical characteristics of the world and prints the results daily.
Section Four: Multiple Choice
1. Analog computers are: a. machines capable of following instructions step by step with calculations b. devices which measure physical quantities such as temperature and air pressure c. books d. toothbrushes e. both a+b
2. Software is: a. all instructions that make a typewriter work b.
all instructions that tell a person what to wear c. all instructions that make a computer work in a required manner d. both a+b+c e. there is no E
3. Hardware is: a. all the electronic or mechanical parts used to give instructions to people b. all the electronic or mechanical parts that make up a computer c. all the electronic or mechanical parts that make a computer think for itself d. all of the above e. none of the above
4. A computer is: a. human b. a tool c. a machine d. a hybrid e. b+c f. c+e
5. A computer has 5 standard features: a. input, output, hardware, software, beachware b. input, output, printout, control, modem c. input, output, control, arithmetic logic, storage d. input and output, CRT, t.v., modem
6. 2 examples of input are: a. keyboard, printer b. printer, mouse c. scanner, monitor d. joystick, scanner
7. Computer processing is: a. work a food processor does b. work the microcomputer does c. work that the printer does
8. What is an input/output device in this list? a. monitor b. keyboard c. CPU d. disk drive e. a+c
9. When data is being read, a copy is being sent to: a. the C.U. b. the B.A.U. c. the A.L.U. d. the C.L.U. e. the B.U.
Section Five: Software
1. What type of software would you use for keeping track of finance records?
2. What type of software would you use to connect to another computer with a modem? ________________________________________________________________________
3. What type of software would you use to keep track of a database? ________________________________________________________________________
4. What would you use for typing a professional letter? ________________________________________________________________________
5. What kind of software would you use for engineering? ________________________________________________________________________
6. What type of software would you use to create a video game on your PC? ________________________________________________________________________
7. What is multitasking?
8.
What type of software would you use for faxing a document with a fax modem? ________________________________________________________________________
9. What type of software would you use to create a document filled with pictures and text? ________________________________________________________________________
10. What does a math co-processor do?
Computer Tutorial Question Sheet for the IBM PS/2
Name/Date:___________________________________
Section One: Misc. Short Answer Questions
1. What is the maximum number of letters/characters that can be in a filename? There can be 8 characters, a separator (.), and an optional extension (3) for a total of 12.
2. What are the main ways of exiting a program? Either clicking on the exit button with the mouse, or pressing ESC, ALT-X or ALT-Q.
3. What is the USUAL capacity of a standard, high-density 3.5", formatted diskette? Approximately 1.44 megabytes.
4. Give three (3) examples of removable storage media. Tape backup drive, removable hard drive (i.e. IOMEGA JAZ, SYQUEST EZDRIVE), CD-ROM, floppy disks, DAT tapes, etc.
5. Give two (2) ways of adding peripherals to your computer. Plug into an external port outside the computer, or open the computer case and add a card to an internal expansion slot.
6. Give three (3) examples of output devices. Monitor, printer, speakers, plotter, etc.
7. Give three (3) examples of input devices. Keyboard, digital camera, retina scanner, flatbed scanner, etc.
8. What is an OS? OS stands for OPERATING SYSTEM, which helps you navigate through and manage files, computer resources and software on your computer.
9. What does LAN stand for? Local area network.
10. BONUS QUESTION: WHAT DOES "MODEM" STAND FOR? MODEM stands for modulator/demodulator.
Section Two: The Computer, Inside and Out: Short Answer
1. What is the "brain" of your computer? The CPU/MPU, or Central Processing Unit.
2. Define RAM. Random Access Memory.
3. Define ROM. Read Only Memory.
4. Give two (2) major areas of the keyboard.
Typing keys, cursor keys, function keys, numeric keypad, etc.
5. What purpose does the microprocessor serve? It accepts your requests and executes them.
6. Give two (2) examples of memory. RAM and ROM.
7. Hardware + Software = FIRMWARE
8. What purpose do the cursor keys serve? To navigate through certain programs and to select certain areas on screen.
9. What purpose does the NUM LOCK key serve? To change the numeric keypad from cursor keys to numerals.
10. What is Hardware? Hardware is any of the components/devices/peripherals which make up the computer.
Section Three: True or False
Circle the correct letter.
1. T or F: Computers are perfect.
2. T or F: Computers can think for themselves.
3. T or F: Computers are going to take over all jobs previously done by humans.
4. T or F: Some computers become obsolete quickly.
5. T or F: Computers can process data quickly and efficiently.
6. T or F: Computers will eventually turn against humans because they are so smart.
7. T or F: Computers are used only in businesses for hard tasks.
8. T or F: Piracy of computer software is okay; everybody does it.
9. T or F: The microcomputer is the smallest of these: micro, mini, mainframe.
10. T or F: A computer is a high-speed machine which performs arithmetic, makes comparisons, and remembers what it has done.
11. T or F: A mainframe computer is a computer which keeps track of all physical characteristics of the world and prints the results daily.
Section Four: Multiple Choice
1. Analog computers are: a. machines capable of following instructions step by step with calculations b. devices which measure physical quantities such as temperature and air pressure c. books d. toothbrushes e. both a+b
2. Software is: a. all instructions that make a typewriter work b. all instructions that tell a person what to wear c. all instructions that make a computer work in a required manner d. both a+b+c e. there is no E
3. Hardware is: a.
all the electronic or mechanical parts used to give instructions to people b. all the electronic or mechanical parts that make up a computer c. all the electronic or mechanical parts that make a computer think for itself d. all of the above e. none of the above
4. A computer is: a. human b. a tool c. a machine d. a hybrid e. b+c f. c+e
5. A computer has 5 standard features: a. input, output, hardware, software, beachware b. input, output, printout, control, modem c. input, output, control, arithmetic logic, storage d. input and output, CRT, t.v., modem
6. 2 examples of input are: a. keyboard, printer b. printer, mouse c. scanner, monitor d. joystick, scanner
7. Computer processing is: a. work a food processor does b. work the microcomputer does c. work that the printer does
8. What is an input/output device in this list? a. monitor b. keyboard c. CPU d. disk drive e. a+c
9. When data is being read, a copy is being sent to: a. the C.U. b. the B.A.U. c. the A.L.U. d. the C.L.U. e. the B.U.
Section Five: Software
1. What type of software would you use for keeping track of finance records? Spreadsheet software.
2. What type of software would you use to connect to another computer with a modem? Communications software.
3. What type of software would you use to keep track of a database? Database software!
4. What would you use for typing a professional letter? Word-processing software.
5. What kind of software would you use for engineering? CAD software such as AUTOCAD.
6. What type of software would you use to create a video game on your PC? Programming software.
7. What is multitasking? Multitasking is the ability to run multiple applications simultaneously.
8. What type of software would you use for faxing a document with a fax modem? Communications/fax software, yet again.
9. What type of software would you use to create a document filled with pictures and text? Desktop publishing software.
10. What does a math co-processor do?
It aids the CPU in performing complex mathematical tasks, helping to speed things up.
f:\12000 essays\technology & computers (295)\Is your information safe .TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Is Your Information Safe?
He doesn't wear a stocking mask over his face, and he doesn't break a window to get into your house. He doesn't hold a gun to your head, nor does he ransack your personal possessions. Just the same, he's a thief, although he is one you'll never see, and you may not even realize right away that he's robbed you. The thief is a computer hacker, and he "enters" your home via your computer, accessing personal information -- such as credit card numbers -- which he could then use without your knowledge -- at least until you get that next credit card statement. Richard Bernes, supervisor of the FBI's Hi-Tech squad in San Jose, California, calls the Internet "the unlocked window in cyberspace through which thieves crawl" (Erickson 1). There seems to be an unlimited potential for theft of credit card numbers, bank statements and other financial and personal information transmitted over the Internet. It's hard to imagine that anyone in today's technologically oriented world could function without computers. Personal computers are linked to business computers and financial networks, and all are linked together via the Internet or other networks. More than a hundred million electronic messages travel through cyberspace every day, and every piece of information stored in a computer is vulnerable to attack (Icove-Seger-VonStorch 1). Yesterday's bank robbers have become today's computer hackers. They can walk away from a computer crime with millions of virtual dollars (in the form of information they can use or sell for an enormous profit). Walking away is precisely what they do.
The National Computer Crimes Squad estimates that 85-97% of the time, theft of information from computers is not even detected (Icove-Seger-VonStorch 1). Home computer users are vulnerable not only to theft of credit card information and login IDs; their files, disks, and other computer equipment and data are also subject to attack. Even if this information is not confidential, having to reconstruct what has been destroyed by a hacker can take days (Icove-Seger-VonStorch 1). William Cheswick, a network-security specialist at AT&T Bell Labs, says home computers that use the Internet are singularly vulnerable to attack. "The Internet is like a vault with a screen door on the back," says Cheswick. "I don't need jackhammers and atom bombs to get in when I can walk in through the door" (Quittner 44). The Internet has become one of the most popular ways to communicate: it's easy, it's fun, and you don't have to leave your home to do it. For many users, the advantage of not having to take the time to drive to the bank is so great that they never consider the possibility that the information they store or transmit might not be safe. Many computer security professionals continue to warn that the lack of Internet security will result in a significant increase in computer fraud, and in easier access to information previously considered private and confidential (Regan 26). Gregory Regan, writing for Credit World, says that only certain types of tasks and features can be performed securely. Electronic banking is not one of them. "I would not recommend performing commercial business transactions," he advises, "or sending confidential information across networks attached to the Internet" (26). In the business world, computer security can be just as easily compromised. More than a third of major U.S.
corporations reported doing business over the Internet -- up from 26 percent a year ago -- but a quarter of them say they've suffered attempted break-ins and losses, either in stolen data or cash (Denning 08A). Dr. Gregory E. Shannon, president of InfoStructure Services and Technologies Inc., says the need to improve computer security is essential. Newly released tools intended to help you check the security of your PC's information can just as easily be used by computer hackers, since these tools are being released as freeware (available, free, to anyone) on the Internet (Cambridge 1). Such freely distributed tools could make it far easier for hackers to break into systems. Presently, a hacker trying to break into a system has to keep probing a network for weaknesses by hand. Before long, hackers will be able to point one of these freeware tools at a network and let it automatically probe for security holes, without any interaction on their part (Cambridge 1). Hackers, it seems, have no trouble staying ahead of the computer security experts. Online service providers, such as America Online, CompuServe and Prodigy, are effective in providing additional protection for computer information. First of all, you need a "secret password" -- a customer ID that is typed in when you log on to the network. You can then only send information, and retrieve your own e-mail, through your own user access. Sometimes the service itself is even locked out of certain information: CompuServe, for example, with its 800-plus private bulletin boards, can't read what's on them without gaining prior permission from the company paying for the service (Flanagan 34). Perhaps in an attempt to show how secure they are, these information services will give out very little information about security itself.
They all take measures to protect private information, and they give frequent warnings to new users about the danger of giving out a password. But there is also danger in making a service easy to use for the general public -- anything that is easy enough for the novice computer user will not present much of a challenge for a computer hacker. Still, there is a certain amount of protection in using a service provider -- doing so is roughly equivalent to locking what might otherwise be an open door (Flanagan 34). The latest weak spot to be discovered is a flaw in the World Wide Web. The Web is the fastest-growing zone within the Internet, and the area where most home computer users travel, as it's attractive and easy to use. According to an advisory issued on the Internet by a programmer in Germany, there is a "hole" in the software that runs most Web sites (Quittner 44). This entry point provides an intruder with access to any and all information, allowing him to do anything the owners of the site can do. Network-security specialist Cheswick points out that most Web sites use software that puts them at risk. With more and more home computer users setting up their own home pages and Web sites, this is just one more way a hacker can gain access to personal information (Quittner 44). Credit bureaus are aware of how financial information can be used or changed by computer hackers, with serious impact on their customers. Loans can be made with false information (obtained by hackers from an unsuspecting computer user's database), and information can be changed for purposes of deceit, harassment or even blackmail. These things occur daily in the financial services industry, and the use of the Internet has only complicated how an organization or private individual keeps information private, confidential and, most importantly, correct (Regan 26). Still, there are some measures that can be taken to help protect your information.
If you use a virus protection program to check any files you download from the Internet, there is less of a chance that a hacker's program can crack your system. Login passwords should be changed frequently (write each one down so you don't forget it, but store the note in a secure place), and they should never contain words or names that are easily guessed. It may be easier for you to remember your password if you use your son's name, but it's also easier for the hacker to guess it. Passwords should always be strictly private -- never tell anyone else what yours is (Regan 26). Evaluate products for their security features before you buy any tool to access the Internet or a service provider, and remember to change the default system password -- the one you are initially given to set up the network on your computer (Regan 26). Finally, and most importantly, it's best to realize that a computer system, regardless of the amount of precaution and protection you take, is never completely protected from outsiders. As protection software becomes more sophisticated, so do the hackers who want to break into your system. It's a good idea not to leave the silver on the dining table when you don't know for sure that a thief can't crawl through your window.
Works Cited
Cambridge Publishing Inc. "PC Security: Internet Security Tool to Deter Hackers." Cambridge Work-Group, Jan. 1995: 1.
Denning, Dorothy E. "Privacy takes another hit from new computer rules." USA Today, 12 Dec. 1996: 08A.
Erickson, Jim. "Crime on the Internet a Growing Concern." Seattle Post-Intelligencer, 15 Nov. 1995. http://technoculture.mira.net.au/hypermail/0032.html
Flanagan, Patrick. "Demystifying the information highway." Management Review, 1 May 1994: 34.
Icove, David; Seger, Karl; VonStorch, William. "Fighting Computer Crime." http://www.pilgrim.umass.edu/pub/security/crime1.html
Quittner, Joshua. "Cracks in the Net." Time, 27 Feb. 1995: 44.
Regan, Gregory.
"Securely accessing the Internet & the World Wide Web: Good or evil?" Credit World, v. 85, 1 Oct. 1996: 26.
f:\12000 essays\technology & computers (295)\ISDN vs Cable modems.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1.0 Introduction
The Internet is a network of networks that interconnects computers around the world, supporting both business and residential users. In 1994, a multimedia Internet application known as the World Wide Web became popular. The higher bandwidth needs of this application have highlighted the limited Internet access speeds available to residential users. Even at 28.8 kilobits per second (Kbps) -- the fastest residential access commonly available at the time of this writing -- the transfer of graphical images can be frustratingly slow. This report examines two enhancements to existing residential communications infrastructure: Integrated Services Digital Network (ISDN), and cable television networks upgraded to pass bi-directional digital traffic (cable modems). It analyzes the potential of each enhancement to deliver Internet access to residential users. It validates the hypothesis that upgraded cable networks can deliver residential Internet access more cost-effectively, while offering a broader range of services. The research for this report consisted of case studies of two commercial deployments of residential Internet access, each introduced in the spring of 1994:
· Continental Cablevision and Performance Systems International (PSI) jointly developed PSICable, an Internet access service deployed over upgraded cable plant in Cambridge, Massachusetts;
· Internex, Inc. began selling Internet access over ISDN telephone circuits available from Pacific Bell. Internex's customers are residences and small businesses in the "Silicon Valley" area south of San Francisco, California.
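The "frustratingly slow" claim above is easy to quantify. A back-of-the-envelope calculation, assuming a 100-kilobyte inline image (an illustrative size, not a figure from the report) and ignoring protocol overhead:

```python
# Rough transfer times for a 100 KB image at the raw line rates of the
# access technologies this report compares. Protocol overhead and
# network congestion would make real transfers slower.
IMAGE_BITS = 100 * 1024 * 8  # 100 kilobytes expressed in bits

access_speeds_kbps = {
    "28.8 Kbps modem": 28.8,
    "ISDN (two 64 Kbps B channels)": 128.0,
    "cable modem (10 Mbps shared)": 10_000.0,
}

for name, kbps in access_speeds_kbps.items():
    seconds = IMAGE_BITS / (kbps * 1000)
    print(f"{name}: {seconds:.1f} seconds")
```

At 28.8 Kbps the image takes almost half a minute to arrive; over an idle 10 Mbps cable segment it takes a fraction of a second. That gap is what motivates the comparison that follows.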
2.0 The Internet
When a home is connected to the Internet, residential communications infrastructure serves as the "last mile" of the connection between the home computer and the rest of the computers on the Internet. This section describes the Internet technology involved in that connection. It does not discuss other aspects of Internet technology in detail; that is well done elsewhere. Rather, it focuses on the services that need to be provided for home computer users to connect to the Internet.
2.1 ISDN and upgraded cable networks will each provide different functionality (e.g. type and speed of access) and cost profiles for Internet connections. It might seem simple enough to figure out which option can provide the needed level of service for the least cost, and declare that option "better." A key problem with this approach is that it is difficult to define exactly the needed level of service for an Internet connection. The requirements depend on the applications being run over the connection, but these applications are constantly changing. As a result, so are the costs of meeting the applications' requirements. Until about twenty years ago, human conversation was by far the dominant application running on the telephone network. The network was consequently optimized to provide the type and quality of service needed for conversation. Telephone traffic engineers measured aggregate statistical conversational patterns and sized telephone networks accordingly. Telephony's well-defined and stable service requirements are reflected in the "3-3-3" rule of thumb relied on by traffic engineers: the average voice call lasts three minutes, the user makes an average of three call attempts during the peak busy hour, and the call travels over a bidirectional 3 KHz channel. In contrast, data communications are far more difficult to characterize. Data transmissions are generated by computer applications. Not only do existing applications change frequently (e.g.
because of software upgrades), but entirely new categories -- such as Web browsers -- come into being quickly, adding different levels and patterns of load to existing networks. Researchers can barely measure these patterns as quickly as they are generated, let alone plan future network capacity based on them. The one generalization that does emerge from studies of both local and wide-area data traffic over the years is that computer traffic is bursty. It does not flow in constant streams; rather, "the level of traffic varies widely over almost any measurement time scale" (Fowler and Leland, 1991). Dynamic bandwidth allocations are therefore preferred for data traffic, since static allocations waste unused resources and limit the flexibility to absorb bursts of traffic. This requirement addresses traffic patterns, but it says nothing about the absolute level of load. How can we evaluate a system when we never know how much capacity is enough? In the personal computing industry, this problem is solved by defining "enough" to be "however much I can afford today," and relying on continuous price-performance improvements in digital technology to increase that level in the near future. Since both of the infrastructure upgrade options rely heavily on digital technology, another criterion for evaluation is the extent to which rapidly advancing technology can be immediately reflected in improved service offerings. Cable networks satisfy these evaluation criteria more effectively than telephone networks because:
· Coaxial cable is a higher quality transmission medium than twisted copper wire pairs of the same length. Therefore, fewer wires, and consequently fewer pieces of associated equipment, need to be installed and maintained to provide the same level of aggregate bandwidth to a neighborhood. The result should be cost savings and easier upgrades.
· Cable's shared bandwidth approach is more flexible at allocating any particular level of bandwidth among a group of subscribers.
Since it does not need to rely as much on forecasts of which subscribers will sign up for the service, the cable architecture can adapt more readily to the actual demand that materializes.
· Telephony's dedication of bandwidth to individual customers limits the peak (i.e. burst) data rate that can be provided cost-effectively. In contrast, the dynamic sharing enabled by cable's bus architecture can, if the statistical aggregation properties of neighborhood traffic cooperate, give a customer access to a faster peak data rate than the expected average data rate.
2.2 Why focus on Internet access?
Internet access has several desirable properties as an application to consider for exercising residential infrastructure. Internet technology is based on a peer-to-peer model of communications. Internet usage encompasses a wide mix of applications, including low- and high-bandwidth as well as asynchronous and real-time communications. Different Internet applications may create varying degrees of symmetrical (both to and from the home) and asymmetrical traffic flows. Supporting all of these properties poses a challenge for existing residential communications infrastructures. Internet access differs from the future services modeled by other studies described below in that it is a real application today, with growing demand. Aside from creating pragmatic interest in the topic, this factor also makes it possible to perform case studies of real deployments. Finally, the Internet's organization as an "Open Data Network" (in the language of the Computer Science and Telecommunications Board of the National Research Council, 1994) makes it a service worthy of study from a policy perspective. The Internet culture's expectation of interconnection and cooperation among competing organizations may clash with the monopoly-oriented cultures of traditional infrastructure organizations, exposing policy issues.
In addition, the Internet's status as a public data network may make Internet access a service worth encouraging for the public good. Therefore, analysis of the costs to provide this service may provide useful input to future policy debates.
3.0 Technologies
This chapter reviews the present state and technical evolution of residential cable network infrastructure. It then discusses a topic not covered much in the literature, namely, how this infrastructure can be used to provide Internet access. It concludes with a qualitative evaluation of the advantages and disadvantages of cable-based Internet access. While ISDN is extensively described in the literature, its use as an Internet access medium is less well documented. This chapter briefly reviews local telephone network technology, including ISDN and future evolutionary technologies, and concludes with a qualitative evaluation of the advantages and disadvantages of ISDN-based Internet access.
3.1 Cable Technology
Residential cable TV networks follow a tree and branch architecture. In each community, a head end is installed to receive satellite and traditional over-the-air broadcast television signals. These signals are then carried to subscribers' homes over coaxial cable that runs from the head end throughout the community.
Figure 3.1: Coaxial cable tree-and-branch topology
To achieve geographical coverage of the community, the cables emanating from the head end are split (or "branched") into multiple cables. When the cable is physically split, a portion of the signal power is split off to send down the branch. The signal content, however, is not split: the same set of TV channels reaches every subscriber in the community. The network thus follows a logical bus architecture. With this architecture, all channels reach every subscriber all the time, whether or not the subscriber's TV is on.
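This shared-bus property is what underlies the statistical-sharing argument of section 2.1: a data subscriber can burst at the full shared rate as long as not too many neighbors are active at the same moment. Under the simplifying assumption that each of n subscribers is independently active with probability p, the chance of overload is a binomial tail sum. The numbers below are purely illustrative, not measurements from either case study:

```python
from math import comb

def prob_overload(n: int, p: float, k: int) -> float:
    """Probability that more than k of n subscribers, each independently
    active with probability p, are transmitting at once."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k + 1, n + 1))

# Illustrative: 200 homes on one neighborhood segment, each active
# 5% of the time, with capacity sized for 25 simultaneous senders.
print(f"chance of overload: {prob_overload(200, 0.05, 25):.6f}")
```

With a mean of only 10 active subscribers, capacity for 25 is exceeded very rarely, so each home can be offered a peak rate far above the average rate it consumes. Dedicated-loop telephony cannot pool idle capacity this way.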
Just as an ordinary television includes a tuner to select the over-the-air channel the viewer wishes to watch, the subscriber's cable equipment includes a tuner to select among all the channels received over the cable.

3.1.1 Technological evolution

The development of fiber-optic transmission technology has led cable network developers to shift from the purely coaxial tree-and-branch architecture to an approach referred to as Hybrid Fiber and Coax (HFC) networks. Transmission over fiber-optic cable has two main advantages over coaxial cable:

· A wider range of frequencies can be sent over the fiber, increasing the bandwidth available for transmission;

· Signals can be transmitted greater distances without amplification.

The main disadvantage of fiber is that the optical components required to send and receive data over it are expensive. Because lasers are still too expensive to deploy to each subscriber, network developers have adopted an intermediate Fiber to the Neighborhood (FTTN) approach.

Figure 3.3: Fiber to the Neighborhood (FTTN) architecture

Various locations along the existing cable are selected as sites for neighborhood nodes. One or more fiber-optic cables are then run from the head end to each neighborhood node. At the head end, the signal is converted from electrical to optical form and transmitted via laser over the fiber. At the neighborhood node, the signal is received, converted back from optical to electrical form, and transmitted to the subscriber over the neighborhood's coaxial tree-and-branch network. FTTN has proved to be an appealing architecture for telephone companies as well as cable operators. Not only Continental Cablevision and Time Warner, but also Pacific Bell and Southern New England Telephone have announced plans to build FTTN networks. Fiber to the Neighborhood is one stage in a longer-range evolution of the cable plant.
These longer-term changes are not necessary to provide Internet service today, but they might affect aspects of how Internet service is provided in the future.

3.2 ISDN Technology

Unlike cable TV networks, which were built to provide only local redistribution of television programming, telephone networks provide switched, global connectivity: any telephone subscriber can call any other telephone subscriber anywhere else in the world. A call placed from a home travels first to the closest telephone company Central Office (CO) switch. The CO switch routes the call to the destination subscriber, who may be served by the same CO switch, another CO switch in the same local area, or a CO switch reached through a long-distance network.

Figure 4.1: The telephone network

The portion of the telephone network that connects the subscriber to the closest CO switch is referred to as the local loop. Since all calls enter and exit the network via the local loop, the nature of the local connection directly affects the type of service a user gets from the global telephone network. With a separate pair of wires to serve each subscriber, the local telephone network follows a logical star architecture. Since a Central Office typically serves thousands of subscribers, it would be unwieldy to string wires individually to each home. Instead, the wire pairs are aggregated into groups, the largest of which are feeder cables. At intervals along the feeder portion of the loop, junction boxes are placed. In a junction box, wire pairs from feeder cables are spliced to wire pairs in distribution cables that run into neighborhoods. At each subscriber location, a drop wire pair (or pairs, if the subscriber has more than one line) is spliced into the distribution cable. Since distribution cables are either buried or aerial, they are disruptive and expensive to change.
Consequently, a distribution cable usually contains as many wire pairs as a neighborhood might ever need, in advance of actual demand. Implementation of ISDN is hampered by the irregularity of the local loop plant. Referring back to Figure 4.3, it is apparent that loops are of different lengths, depending on the subscriber's distance from the Central Office. ISDN cannot be provided over loops with loading coils or loops longer than 18,000 feet (5.5 km).

4.0 Internet Access

This section contrasts Internet access via the cable plant with access via the local telephone network.

4.1 Internet Access Via Cable

The key question in providing residential Internet access is what kind of network technology to use to connect the customer to the Internet. For residential Internet delivered over the cable plant, the answer is broadband LAN technology. This technology allows transmission of digital data over one or more of the 6 MHz channels of a CATV cable. Since video and audio signals can also be transmitted over other channels of the same cable, broadband LAN technology can co-exist with currently existing services.

Bandwidth

The speed of a cable LAN is described by the bit rate of the modems used to send data over it. As this technology improves, cable LAN speeds may change, but at the time of this writing, cable modems range in speed from 500 Kbps to 10 Mbps, or roughly 17 to 340 times the bit rate of the familiar 28.8 Kbps telephone modem. This speed represents the peak rate at which a subscriber can send and receive data, during the periods of time when the medium is allocated to that subscriber. It does not imply that every subscriber can transfer data at that rate simultaneously. The effective average bandwidth seen by each subscriber depends on how busy the LAN is.
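The peak-versus-average distinction can be made concrete with a back-of-the-envelope sketch; the subscriber count and 5% activity factor below are illustrative assumptions, not figures from the case study:

```python
# Illustrative peak vs. average bandwidth on a shared cable LAN channel.
phone_modem_kbps = 28.8
channel_kbps = 10_000        # one 10 Mbps cable LAN channel
subscribers = 200            # assumed homes sharing the channel
duty_cycle = 0.05            # assumed fraction of time each is transferring

# Peak rate: the full channel, available when neighbors happen to be idle.
peak_ratio = channel_kbps / phone_modem_kbps
print(f"peak: {peak_ratio:.0f}x a 28.8 Kbps telephone modem")

# Average share: the channel divided among the expected number of active users.
expected_active = subscribers * duty_cycle
avg_share_kbps = channel_kbps / expected_active
print(f"average: ~{avg_share_kbps:.0f} Kbps with ~{expected_active:.0f} users active")
```

If more neighbors are active at once, the average share shrinks; the peak stays the same, which is exactly the "variable bandwidth" behavior described here.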
Therefore, a cable LAN will appear to provide a variable bandwidth connection to the Internet.

Full-time connections

Cable LAN bandwidth is allocated dynamically to a subscriber only when he has traffic to send. When he is not transferring traffic, he does not consume transmission resources. Consequently, he can always be connected to the Internet Point of Presence without requiring an expensive dedication of transmission resources.

4.2 Internet Access Via Telephone Company

In contrast to the shared-bus architecture of a cable LAN, the telephone network requires the residential Internet provider to maintain multiple connection ports in order to serve multiple customers simultaneously. Thus, the residential Internet provider faces problems of multiplexing and concentration of individual subscriber lines very similar to those faced in telephone Central Offices. The point-to-point telephone network gives the residential Internet provider an architecture to work with that is fundamentally different from the cable plant. Instead of multiplexing the use of LAN transmission bandwidth as it is needed, subscribers multiplex the use of dedicated connections to the Internet provider over much longer time intervals. As with ordinary phone calls, subscribers are allocated fixed amounts of bandwidth for the duration of the connection. Each subscriber that succeeds in becoming active (i.e. getting connected to the residential Internet provider instead of getting a busy signal) is guaranteed a particular level of bandwidth until hanging up the call.

Bandwidth

Although the predictability of this connection-oriented approach is appealing, its major disadvantage is the limited level of bandwidth that can be economically dedicated to each customer. At most, an ISDN line can deliver 144 Kbps to a subscriber, roughly four times the bandwidth available with POTS. This rate is both the average and the peak data rate.
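The 144 Kbps figure is the standard ISDN Basic Rate Interface total; one way to read the "roughly four times POTS" comparison is to count only the two bearer channels against a 28.8 Kbps modem (the channel breakdown is standard ISDN, the modem baseline an assumption):

```python
# ISDN Basic Rate Interface channel structure (2B + D).
b_channels_kbps = 2 * 64   # two 64 Kbps bearer (B) channels
d_channel_kbps = 16        # one 16 Kbps signaling (D) channel
total_kbps = b_channels_kbps + d_channel_kbps

print(total_kbps)                        # 144 Kbps total line rate
print(round(b_channels_kbps / 28.8, 1))  # usable data channels vs. 28.8 Kbps modem
```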
A subscriber needing to burst data quickly, for example to transfer a large file or engage in a video conference, may prefer a shared-bandwidth architecture, such as a cable LAN, that allows a higher peak data rate for each individual subscriber. A subscriber who needs a full-time connection requires a dedicated port on a terminal server. This is an expensive waste of resources when the subscriber is connected but not transferring data.

5.0 Cost

Cable-based Internet access can provide the same average bandwidth and higher peak bandwidth more economically than ISDN. For example, 500 Kbps Internet access over cable can provide the same average bandwidth and four times the peak bandwidth of ISDN access for less than half the cost per subscriber. In the technology reference model of the case study, the 4 Mbps cable service is targeted at organizations. According to recent benchmarks, the 4 Mbps cable service can provide the same average bandwidth and thirty-two times the peak bandwidth of ISDN for only 20% more cost per subscriber. When this reference model is altered to target 4 Mbps service to individuals instead of organizations, 4 Mbps cable access costs 40% less per subscriber than ISDN. The economy of the cable-based approach is most evident when comparing the per-subscriber cost per bit of peak bandwidth: $0.30 for Individual 4 Mbps, $0.60 for Organizational 4 Mbps, and $2 for the 500 Kbps cable services, versus close to $16 for ISDN. However, the potential penetration of cable-based access is constrained in many cases (especially for the 500 Kbps service) by limited upstream channel bandwidth. While the penetration limits are quite sensitive to several of the input parameter assumptions, the cost per subscriber is surprisingly less so.
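Taking the quoted cost-per-peak-bit figures at face value (their absolute units are defined in the underlying study, not here), the relative economics reduce to simple ratios:

```python
# Per-subscriber cost per bit of peak bandwidth, as quoted in the text.
cost_per_peak_bit = {
    "Individual 4 Mbps cable": 0.30,
    "Organizational 4 Mbps cable": 0.60,
    "500 Kbps cable": 2.00,
    "ISDN": 16.00,
}

isdn = cost_per_peak_bit["ISDN"]
for service, cost in cost_per_peak_bit.items():
    if service != "ISDN":
        print(f"{service}: {isdn / cost:.0f}x cheaper per peak bit than ISDN")
```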
Because the models break down the costs of each approach into their separate components, they also provide insight into the match between what follows naturally from the technology and how existing business entities are organized. For example, the models show that subscriber equipment is the most significant component of average cost. When subscribers are willing to pay for their own equipment, the access provider's capital costs are low. This business model has been successfully adopted by Internex, but it is foreign to the cable industry. As the concluding chapter discusses, the resulting closed market structure for cable subscriber equipment has not been as effective as the open market for ISDN equipment at fostering the development of needed technology. In addition, commercial development of both cable and ISDN Internet access has been hindered by monopoly control of the needed infrastructure, whether manifest as high ISDN tariffs or simple lack of interest from cable operators.

f:\12000 essays\technology & computers (295)\ITT Trip Scheduling.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

ITT Trip Scheduling

The Information, Tours and Tickets (ITT) office could use a system to assist them in creating trip schedules. In this paper I will outline a plan for a Decision Support System (DSS) that will assist ITT in creating schedules for their tours. This system will also track customer surveys and hold data about all of ITT's trips. They already have some computer systems, a spreadsheet program, and a database management system (DBMS), which can all be used to build a small DSS. Using the DBMS and the spreadsheet software I have designed a system to assist them in making decisions about scheduling trips. This system also allows them access to information about all of ITT's trips and the feedback from customers about each trip. In the next few paragraphs I go through the major steps in developing a system of this nature.
A system of this type goes through several phases in its development. I start with the planning phase and go on to discuss research, analysis, and conceptual design. Then I talk a little about the models used in this system. I finish by talking about the actual design, the construction, and the implementation of the new ITT system, and close the paper with a discussion of maintaining it. The first step in building any DSS is planning. Planning is basically defining the problem. It also involves an assessment of exactly what is needed. In this case I deal with trip scheduling. In the case description this would include: how many trips to offer, the days of the week to have particular trips, and when to cancel trips. Obviously the scheduling ties in to other information such as profit and participation, but for this paper I will only cover the scheduling portion of ITT's problem. Therefore I have defined the problem as a basic scheduling problem. I see a need for ITT to better schedule trips using the information they have and the information they collect from customer surveys. With the problem defined, we can now look at what information is needed to further analyze the problem. After a problem is defined, information must be collected. The research phase of system development is just that: collecting information. The information collected will be used in the next phase of development to further analyze the problem, and it will be used in this case to build the databases. The databases will then be used with decision support system (DSS) models to assist ITT in making scheduling decisions. Information in this case can come from their current schedules and trip description fliers. Also during this stage of development the current resources are assessed. This would include ITT's current information systems and their current budget. And information such as Navy or ITT policies is collected as a reference.
Once all the information is collected, the system can move to the next stage of development: analysis. With all the data and information collected, analysis of it begins. In this stage we determine what needs to be done to solve the problem. No work on a new system is started yet, but a system is conceptualized and possible solutions are identified. Also in this stage a final solution to the problem is chosen, and the system passes through another stage of development. For the ITT problem I have chosen a simple Management Information System (MIS) with small decision support models to aid in creating schedules. This system will provide ITT with the information they need to make decisions about scheduling their trips, as well as allow them to create the schedules directly from computer models. I will discuss the models in the next paragraph. The system would not draw conclusions, but simply show the pros and cons of certain choices. The MIS portion of the system will simply provide information to the users and to the DSS. The DSS portion of the system will allow a schedule to be created using resources in an optimum manner. I decided to go with a small and simple system because of ITT's limited resources and because of high employee turnover. A complicated system would not be feasible in an environment where new employees constantly have to be trained to use it. In this paragraph I step aside from the development a little to talk about the models used in the system. As stated earlier, the models used in this system should be kept simple and small if possible. Using standard spreadsheet software, models can be created that will show the optimal schedule for trips. The basic information required for these models should include bus schedules, reservation requirements, customer satisfaction information, and cost data. Other data can also be added to assist in decisions. The models would first approximate the participation for each proposed trip.
Then another model would determine if the trip is feasible given the costs involved. The next model would determine if the trip is even possible considering what is required as far as reservations and transportation. Another model could also determine if the trip would be able to satisfy the customers, given past customer input. Finally, after determining whether each trip is worth offering, the ITT employees could use the computer to generate a new schedule. Now that we know what the system should do, we can turn our attention to the design of the system. In the design phase of DSS development the new system is designed to solve the problem. Here the information collected and the resources identified are examined to decide exactly what must be done, and in what manner, to solve the problem. Diagrams may be drawn to show how the components will fit together. Also the groundwork is laid for the construction phase. In the ITT case I designed a database that contains information on all their trips along with information obtained from the customer surveys. This information is then combined with bus and reservation data in a standard spreadsheet, where it is manipulated to optimize the trip schedule. A manager or an employee can then use the information and the data in the database to create a calendar of events using an inexpensive program, 'Calendar Creator,' which is what ITT currently uses to create schedules manually. The construction of a system is the bringing together of all the required parts and making the system do what it's supposed to do. In this case the system I have designed will only require a minimum of additional resources. I designed the system to work on their existing computers using their existing software. The databases and the spreadsheet models could be built by knowledgeable employees with minimal outside help. Once constructed, the system could be run and the results compared with the old system to determine if it is functioning properly.
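The chain of screening models described earlier (estimate participation, then check cost, logistics, and customer satisfaction) might be sketched as follows; every trip, price, seat count, and threshold here is a hypothetical illustration, not ITT data:

```python
# Hypothetical trip-screening sketch following the steps in the text.
def trip_is_worth_offering(trip, bus_seats=40, min_margin=0.0,
                           min_satisfaction=3.5):
    expected = trip["past_avg_participants"]          # crude participation estimate
    revenue = expected * trip["price"]
    if revenue - trip["cost"] < min_margin:           # cost feasibility
        return False
    if expected > bus_seats:                          # reservation/transport limit
        return False
    if trip["avg_survey_score"] < min_satisfaction:   # customer feedback check
        return False
    return True

trips = [
    {"name": "City Tour", "past_avg_participants": 25, "price": 20,
     "cost": 300, "avg_survey_score": 4.2},
    {"name": "Museum Trip", "past_avg_participants": 8, "price": 15,
     "cost": 250, "avg_survey_score": 3.0},
]
schedule = [t["name"] for t in trips if trip_is_worth_offering(t)]
print(schedule)
```

The surviving trips would then be fed into the calendar program, just as the paper describes.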
The results could also tell if the system is optimizing the schedule or just speeding up what is already done manually. Implementing any system is the process of putting it into use. In this project the implementation phase should be a fairly easy conversion. The old way of manually deciding on trips and putting the results into Calendar Creator is simply replaced with an automated selection of trips that the employee can use to create a calendar. When the system is operating normally it should improve the way ITT does business. After implementing this system it will have to be maintained. New models will have to be added, and old models will have to be changed or removed. With the simple models used in this system that should not be difficult. The hardest component in this system to maintain will be the databases. They will have to contain the most current data in order for the system to operate properly. I suggest a data-checking module be added at some point in order to maintain data consistency. Inconsistent data can degrade the system's performance and cause it to give inaccurate information. A data-checking module will ensure that the information entered into the system is accurate and consistent with the rest of the system. There could be many solutions to this problem, but given the limited budget most are not feasible. The system I have designed should be more than sufficient to assist them in creating schedules faster and more efficiently. It will also give customers more of what they want and should improve repeat business.

f:\12000 essays\technology & computers (295)\Journalism on the Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Journalism on the Internet

The common forms of media in today's world each have both advantages and disadvantages.
The Internet has been around almost as long as most of them, but only recently has it become a popular way of retrieving information. The Internet takes the best of all the other media and combines them into a unique form. The Internet is the best way to retrieve information. This combination of paper publishing, TV, radio, telephones, and mail is the future of communications. The Internet offers several types of journalism, which can be divided into three sections. The first is online magazines, online broadcasting, and other online services. The next group is resource files and web pages. The third is discussion groups/forums and e-mail. I will investigate these areas of the net, showing the advantages and disadvantages of each in comparison to the conventional forms. In order to understand what all these topics are, you must first understand what the Internet is. The simple answer is that it is computers all over the globe connected together by telephone wires. It was first made by the military, "No one owns the Internet", to have a network with no centre. That way it could never be destroyed by nuclear war. Since then, universities have used it and it has evolved into what it is today. It is a library that contains mail, stories, news, advertising, and just about everything else. "In a sense, freenets are a literacy movement for computer mediated communication today, as public libraries were to reading for an earlier generation." Now that the term "the net" is understood, let's look at some sections of the net. An online magazine is a computer that lets users access it through the net. This computer stores one or more magazines which users can read. "PC magazine and other magazines are available on the Web"; "Maclean's Magazine and Canadian Business online; and Reuters' Canadian Newsclips." This form is much better than conventional publishing, "we are using the online service to enhance the print magazine", for several reasons.
It is environmentally safe, "Publish without Paper"; most are free, "$50 a month on CompuServe"; you can get any article from any year at the touch of a button; and you can search for key words. "Search engines make it easy pinpointing just the information you need". The articles don't have space limits, so you will get a specially edited full-story version (depending on the reporter) and other articles that didn't make the print. It is easy to compare the story with another journalist's view, or get the story from a journalist in another country. This way, the reader can make informed decisions on anything, without bias. A few people complain that there is too much information to receive, "mass jumble", but there are filter programs that will cut the information to any set amount. CNN online is a broadcast web page (another computer). CNN not only has articles to read but video and sound clips too. Anyone can get up-to-the-minute news and reports. "We will send a reporter to the game, who will interview people like the coach and uplink the story while the game is being played." This is an excellent addition to TV. It is a mix of TV and publishing. TV has a schedule to keep and might cut out parts simply for time, but there is no time limit online. Also, because it is interactive, users will remember the information longer than if they watched TV. An online service is a web page that sells something. It is easy to order anything, from flowers to airline tickets. "...opportunity to buy tickets through TicketMaster." But even this has problems, "the Internet is new and many possible types of fraud must be dealt with," but the solution is software, "Secure Courier...a secure means of transferring financial transactions". This service is the replacement for home shopping, catalogues, and printed fliers. Its advantage is that you can buy directly, or skip the middlemen if you wish, unlike TV.
Web pages on the Internet are computers that are dedicated to letting people access them. Many companies have a web page that offers help to customers, news, services, product updates, advice from experts, even "information on elections, government programs, and so forth." "These new, online services include daily industry news, classified, a directory of suppliers, an interactive forum, and tons of reference material, including government documents, surveys, speeches, papers, and statistics." Even home businesses can have a page and advertise their products or services. The only other medium that comes close to what a web page can do is the help telephone line, but a web page is much more useful. Resource files are like a library of information. By using a search program a user can find files on any topic. They can get digital books, reports, pictures, statistics, university essays, sound files, video, and even programs: "You can even download the federal budget simulator". However, there is always going to be the possibility of false information, but because it is so easy to speak your mind on the net, this bad information is quickly found and deleted. "Established sources such as universities, libraries, and government agencies can be considered reasonably reliable....Then comes the free-for-all." "You must be a critical viewer of both the source and the content." The final area is discussion groups, or forums. There is a forum for just about any topic. "The overall advantage is the spread of ideas, information, and thoughts between people who would not otherwise correspond. The result is a free flow of ideas with little moderation or control". A forum is a mail group that allows people all over the world to discuss a topic, trade information, and so on: "everything from uploaded works by Canadian artists to chats on hockey and politics." Each forum has many users, each with their own point of view. Anyone can talk, biased or not, loving or hating the topic.
"There are no rules about what can or can not go on the Internet. Legal standards are almost impossible to establish and even less likely to be enforced on a global link,". However, this free flow of information can cause problems. These are evident in adult forums and the EFF. The Electronic Freedom Foundation is a group of people that want all information to be available to anyone. This information can be anything such as; how to build a car bombs, atomic bombs, working computer virus code, government files, UFO info, hacking, cracking (copying software), and pheaking (free telephone calls). This information is illegal in some countries, and can be harmful or fatal if used. It is still available because of the freedom of information act. The information has always been available, but only lately has it become this easy to get. Adult forums and web pages have created a stir in the government. There are explicit pictures, novels, catalog, stories, mail, and even child porn on the net. The government has set out to stop the child porn but allowed the other adult material to pass by. It would be improper for a young child to access this information. To stop this, parents can install programs to lock out these web pages, but a knowledgeable child can still get access to them. The government is currently working on this problem and setting up laws to protect the people who want to be protected, while not infringing on the rights of the people who want access to this information. As you can see, the Internet has the potential to be the worlds #1 medium. With the ever expanding Web and a growing number of users, this is only a matter of time. Journalism on the Internet is only one of many things that will be available through the net. As these technologies advance, barriers will be broken, rules set, and the world's knowledge will be a phone call and a mouse click away. Footnotes in Order Bill Kempthorne, "Internet, So What?", The Computer Paper, September, (1995), p. 
20 Trueman, "The 1995 Canadian Internet Awards", The Computer Paper, September, (1995), p. 94 Michael J. Miller, "Where Do I Want to Go Today", PC Magazine, March 28, (1995), P. 75 Sorelle Saidman, "Online Canadian Content Expanding despite Prodigy Setback", Toronto Computes, November, (1995), p. 9 Doug Bennet, "Confessions of an online publisher", Toronto Computes, November (1995), p. 35 "The Internet Comes of Age" PC Magazine, May 30, (1995), P. 19 Casey Abell, "Letters", PC Magazine, May 30, (1995), P. 19 Rick Ayre and Don Willmott, "The Internet Means Business", Pc Magazine, May 16, (1995), p. 197 Bill Kempthorne, "Internet, So What?", The Computer Paper, September, (1995), p. 20 Chris Carder, "Sports on the Internet a winner", Toronto Computes, November, (1995), P. 98 Chris Carder, "Sports on the Internet a winner", Toronto Computes, November, (1995), P. 98 Patrick McKenna, "Netscape's Digital Envelope For Internet Transactions", The Computer Paper, September, (1995), p. 90 Patrick McKenna, "Netscape's Digital Envelope For Internet Transactions", The Computer Paper, September, (1995), p. 90 Michael J. Miller, "Where Do I Want to Go Today", PC Magazine, March 28, (1995), P. 75 Doug Bennet, "Confessions of an online publisher", Toronto Computes, November (1995), p. 37 Michael J. Miller, "Where Do I Want to Go Today", PC Magazine, March 28, (1995), P. 75 Bill Kempthorne, "Internet, So What?", The Computer Paper, September, (1995), p. 21 Bill Kempthorne, "Internet, So What?", The Computer Paper, September, (1995), p. 21 Bill Kempthorne, "Internet, So What?", The Computer Paper, September, (1995), p. 21 Sorelle Saidman, "Online Canadian Content Expanding despite Prodigy Setback", Toronto Computes, November, (1995), p. 9 Bill Kempthorne, "Internet, So What?", The Computer Paper, September, (1995), p. 
22 f:\12000 essays\technology & computers (295)\Lasers.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ LASERS The laser is a device that a beam of light that is both scientifically and practically of great use because it is coherent light. The beam is produced by a process known as stimulated emission, and the word "laser" is an acronym for the phrase "light amplification by stimulated emission of radiation." Light is just like radio waves in the way that it can also carry information. The information is encoded in the beam as variations in the frequency or shape of the light wave. The good part is that since light waves have much higher frequencies they can also hold much more information. Not only is the particle the smallest light unit but it is a particle as well as a wave. In beams of light whether they are ordinary natural or artificial the photon waves will not be traveling together because they are not being emitted at exactly the same moment but instead at random short bursts. Even if the light is of a single frequency that statement would also be true. A laser is useful because it produces light that is not only of essentially a single frequency but also coherent, with the light waves all moving along in unison. Lasers consist of several components. A few of the many things that the so-called active medium might consist of are, atoms of a gas, molecules in a liquid, and ions in a crystal. Another component consists of some method of introducing energy into the active medium, such as a flash lamp for example. Another component is the pair of mirrors on either side of the active medium which consists of one that transmits some of the radiation that hits it. If the active component in the laser is a gas laser than each atom is characterized by a set of energy states, or energy levels, of which it may consist. 
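The connection between these energy levels and the light the atom emits is the Planck relation, ΔE = h·f. A quick sketch, assuming the standard 632.8 nm red line of the helium-neon laser (the wavelength is a textbook figure, not taken from this essay):

```python
# Planck relation: the energy gap between two levels fixes the emitted
# frequency, delta_E = h * f. Assumed wavelength: helium-neon 632.8 nm line.
h = 6.626e-34          # Planck's constant, J*s
c = 2.998e8            # speed of light, m/s
wavelength_m = 632.8e-9

frequency_hz = c / wavelength_m      # the one frequency the transition works with
energy_j = h * frequency_hz          # energy gap between the two levels
energy_ev = energy_j / 1.602e-19     # same gap in electron volts

print(f"{frequency_hz:.3e} Hz, {energy_ev:.2f} eV")
```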
The energy states can be pictured as an unevenly spaced ladder in which the higher rungs represent higher states of energy and the lower rungs lower states of energy. If left undisturbed for a long time, the atom will reach its ground state, or lowest state of energy. According to quantum mechanics, only one light frequency corresponds to a given pair of levels. There are three ways the atom can respond to the presence of light: it can absorb the light, spontaneous emission can occur, or stimulated emission can occur. Absorption means that if the atom is in its lower state, it may absorb the light and jump to its higher state. Second, if it is in its higher state, it can fall spontaneously to its lower state, emitting light. Third, light can stimulate the atom to jump from its upper state to its lower state, emitting extra light in the process. Spontaneous emission is not triggered by light; rather, it occurs on a time scale characteristic of the states involved, called the spontaneous lifetime. In stimulated emission, the frequency of the emitted light is the same as the frequency of the light that stimulated it. Carbon-monoxide, color center, excimer, free-electron, gas-dynamic, helium-cadmium, hydrogen-fluoride, deuterium-fluoride, iodine, Raman spin-flip, and rare-gas halide lasers are just a few of the many types of lasers out there in the world. The helium-neon laser is the most common and by far the cheapest, costing about $170. The diode laser is the smallest, being packed in a transistor-like package. Dye lasers are prized for their broad, continuously variable wavelength capabilities. The theory of stimulated emission was first proposed by Albert Einstein in 1916; population inversion was then discussed by V. A. Fabrikant in 1940. This led to the building of the first ammonia maser in 1954 by J. P. Gordon, H. J. Zeiger, and Charles H. Townes. In July of 1960 Theodore H.
Maiman announced the generation of a pulse of coherent red light by means of a ruby crystal: the first laser. In 1987 Gordon Gould won a patent he had been trying to get for thirty years to build the first gas-discharge laser, which he had conceived in 1957. The helium-neon laser was included in that same patent. 
f:\12000 essays\technology & computers (295)\Macintosh Rules.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ What computer is the fastest? What computer is the easiest to use? What computer is number one in education, and multimedia? That's right, the Macintosh line of computers. A strong competitor in the realm of computing for a number of years, the Macintosh is still going strong. The reasons are apparent, and numerous. 
For starters, who wants a computer with no power? Macintosh sure doesn't! Independent tests prove that today's Power Macintosh computers, based on the PowerPC processor, outperform comparable machines based on the Intel Pentium processor. In a benchmark test conducted in June 1995, using 10 applications available for both Macintosh and Windows 3.1 systems, the 120-megahertz Power Macintosh 9500/120 was, on average, 51 percent faster than a 120-megahertz Pentium processor-based PC. The 132-megahertz Power Macintosh 9500/132 was 80 percent faster when running scientific and engineering applications, and 102 percent faster when running graphics and publishing applications. You can understand why the education market is almost entirely Apple-based. Recent surveys confirm that from kindergarten through college, Apple has cornered the market in education, and remains number one in this U.S. market. Apple Macintosh computers account for 60% of the 5.9 million machines in U.S. schools for the 1995-96 school year. Only 29% of schools use the Microsoft/Intel platform, and DOS accounts for a measly 11%. It was also reported that 18.4% of four-year college students own a Macintosh. 55% of college students own a computer, and Apple's in the lead for that market too! Apple attributes this continued success to the Mac's ease of use. There is no doubt that the Macintosh is the easiest computer around. The scrolling menu bar is the first example. If a Macintosh menu is too long to fit on the screen, you can scroll down to see all of the items. Windows 95 menus, by contrast, don't scroll up or down. So if you put too many items into the Windows 95 Start button, some will remain out of reach, permanently! Windows 95 hierarchical menus can become confusing as they become more crowded. When you install so many applications onto a PC that they form two columns in the Start Programs menu, the menus may not flow well together. 
You'll have to jump quickly across from menu list to menu list, which can be difficult to do. The second example I cite is the better integration of hardware and software. Because Apple makes both the hardware and the operating system, the two work together easily; when a change is made at the hardware level, the software automatically recognizes it and acts accordingly. In the PC world, Microsoft develops Windows 95 and many different manufacturers make the hardware systems. So, the software and hardware don't always work well together. Here are a few areas where the Macintosh is particularly strong concerning compatibility: floppy disks, memory management, monitor support, mouse support, adding peripherals, connecting to a network, and internet access and publishing. And the last example I'll show is the ease of adding new resources. When you add capabilities to your Macintosh, it seems to anticipate what you're doing, and even tries to help. For example, to add fonts or desk accessories to the Macintosh, all you have to do is drag them to the System Folder. The Mac OS, or operating system, places all of the items where they need to go, automatically. Here are the steps for Windows 95:
1. Double-click on the C: drive in "My Computer."
2. Open the Windows folder.
3. Open the Fonts folder.
4. Click Install New Font in the File menu.
5. Click the drive and the folder that contain the font you want to add.
6. Double-click the name of the font you want to add.
As anyone can plainly see, the choice is obvious and the Mac's the best! Multimedia is an exploding business throughout movies, advertising, and graphic design. Most multimedia developers create their applications on a Macintosh. According to one research company, Apple's Macintosh is the leading development platform for multimedia CD-ROM titles by a 72% to 28% margin. 
As a recent article in the San Francisco Examiner puts it, "Walk into any newsroom, desktop publishing center, design studio, or online service office, and nine times out of 10 you will see a wall of Macs." That's quite a statement! There are definite reasons for this too. Installing and using CD-ROM titles is easier with Macintosh computers than with PCs running Windows 95. Today's PCs have multiple standards for sound and graphics, and each standard and each piece of hardware requires a different software driver. As a result, PC owners have problems matching the hardware and software in their systems to the hardware and software requirements of different CD-ROM titles, and different titles can behave very differently. In contrast, CD-ROM titles for Macintosh are easier to install and use. Macintosh computers have a single, built-in standard for sound and graphics, so no special drivers are required. And Macintosh was the first home computer to include built-in MPEG hardware playback for full-screen, full-motion video. Apple's Power Macintosh 7500/100 and 8500/120 computers include nearly everything a user needs to quickly and easily begin videoconferencing. QuickTime Conferencing software, high-speed communications capability, and video/sound input are all included. Users need only connect a video camera to the Macintosh video-in connector. With Apple's QuickTime Conferencing software, users can call other videoconference participants over their existing local area networks. Users can see multiple participants at once, take snapshots during sessions, record sessions, and work together on a shared document. Compare this simplicity and power with videoconferencing products in the Windows 95 world, where users must still purchase expensive add-on cards and software totaling $1,400 or more, and then deal with the complexities of integrating the hardware and software themselves. 
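Claims like the earlier "51 percent faster" figure boil down to a ratio of the times two machines take to complete the same task. A small sketch of that arithmetic (the benchmark times below are invented placeholders, not the magazine's measurements):

```python
def percent_faster(time_slower, time_faster):
    """Express a timing ratio the way the essay's benchmarks do:
    a machine finishing in 2/3 of the time is '50 percent faster'."""
    return (time_slower / time_faster - 1.0) * 100.0

# Hypothetical benchmark: the PC takes 302 seconds on a task
# that the Power Macintosh finishes in 200 seconds.
print(round(percent_faster(302, 200)))  # 51 -> "51 percent faster"
```

Note the asymmetry of such figures: "51 percent faster" does not mean the slower machine took 51 percent longer than half the work, only that its time was 1.51 times the faster machine's.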
Speech integration with computers is the wave of the future, and guess who's got the jump in that department. With PlainTalk, you can open any Macintosh document or application by speaking its name. Just move an alias of the item into the Speakable Items folder, and the built-in PlainTalk and Speakable Items technologies take care of the rest. For example, a user who wants to check her stock portfolio without opening several folders and launching an application can just say "check stocks," and the Macintosh will execute the necessary commands. Speakable items can also be AppleScript files, so users can execute an almost unlimited series of actions--including copying files, cleaning up the desktop, and so on--simply by speaking a command. In conclusion, the Macintosh is the computer that can do it all. Handling business tasks, creating breathtaking multimedia, and lots, lots more, all at the fastest speed available. It is no wonder Apple has made such a name for itself, and will likely be in the market for a long time to come. f:\12000 essays\technology & computers (295)\Macintosh vs IBM.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The IBM and Macintosh computers have been in competition with each other for years, and each of them has its strong points. They both had their own ideas about where they should go in the personal computer market. They also had many developments which propelled one over the other. It all started when Thomas John Watson became president of Computing Tabulating Recording in 1914, and in 1924 he renamed it International Business Machines Corporation. He eventually widened the company's lines to include electronic computers, which were extremely new in those days. In 1975 IBM introduced their first personal computer (PC), which was called the Model 5100. 
It carried a price tag of about $9,000, which kept it out of the mainstream of personal computers. Even though their first computer did not get off to as big a start as they had hoped, that did not stop them from continuing on. Later on, IBM teamed up with Microsoft to create an operating system to run their new computers, because their software division was not able to meet a deadline. They also teamed up with Intel to supply the chips for the first IBM personal computer. When the personal computer hit the market it was a major hit, and IBM became a strong power in electronic computers. Phoenix Technologies went through published documentation to figure out the internal startup firmware (BIOS) in the IBM. In turn, they designed a BIOS of their own which could be used with IBM computers. It stood up in court, and now, with a non-IBM BIOS, the clone was created. Many manufacturers jumped in and started making their own IBM-compatible computers, and IBM eventually lost a big share of the desktop computer market. While IBM was just getting started in the personal computer market, Apple was also just getting on its feet. It was founded by Steve Jobs and Steve Wozniak in 1976. They were both college dropouts, Steve Jobs out of Reed College in Oregon and Steve Wozniak from the University of Colorado. They ended up in Silicon Valley, which is located in northern California near San Francisco. Wozniak was the person with the brains and Jobs was the one who put it all together. For about $700 someone could buy the computer that they put together, which was called the Apple I. They hired a multimillionaire, Armas Clifford Markkula, a 33-year-old, as chief executive in 1977. In the meantime Wozniak was working at Hewlett-Packard until Markkula encouraged him to quit his job with them and to focus his attention on Apple. Apple went public in 1980, at about $22 a share. 
In 1977 the Apple II was introduced, which set the standard for many of the microcomputers to follow, including the IBM PC. The Macintosh and IBM computers have been in competition ever since they put out their first personal computers. In 1980, the personal computer world was dominated by two types of computer systems. One was the Apple II, which had a huge group of loyal users, and also a large group of people developing software for it. The other was the IBM-compatible, which for the most part all used the same software and plug-in hardware. In 1983 Apple sold over $1 billion in computers and hardware. Apple was now trying to appeal more to the business world, so they designed the Lisa computer, a prototype for the Macintosh, which cost around $10,000. It featured a never-before-seen graphical interface and the mouse, which are as common as any other component on the computer today. Lotus introduced a spreadsheet program for the IBM PC called 1-2-3, which caused anticipated sales of the Lisa computer to drop by nearly half. In order for Apple to compete with the IBM-compatible they had to change some things around. Jobs headed the development of the Macintosh, with the goal in mind of a "computer for the rest of us." He wanted it to be easily set up out of the box and up and running in 15 minutes. The developers of the Macintosh made it so that you could not upgrade it, for they did not think that you needed to open your computer. In 1984, they launched the Macintosh for $2,495. The advertisement for it cost around $500,000 to produce and more than $1.5 million to air on Super Bowl Sunday in 1984. They decided later that if they wanted to keep up with IBM they would have to make the Macintosh cheaper and easier to upgrade in order to appeal to the business market. In 1991 Apple's desktop computing business was going downhill, and Motorola, their chip manufacturer, was becoming known as the company that was always one step behind Intel. 
So Apple lost developers for their personal computer. One thing that is different between the IBM and Macintosh is the type of CPU architecture they use. The IBM computers have been using the same chip design since IBM first created the personal computer. They created their systems around a CPU design from Intel, which used an architecture called CISC (Complex Instruction Set Computing). This also allowed the IBM computer to remain compatible through the years with the older systems. For instance, if you had some sort of typing program on an IBM-compatible computer that had a 286-12 CPU, you could run that same exact software on one of the newest Pentiums today. So even after 10 years the same software could be used. This also has its downsides, because it means we have been using an internal CPU architecture that is at least 20 years old. One thing that IBM users can look forward to is the advancements that Intel is making with its CPUs. One of the latest things to hit the market is MMX, which allows programs that are more graphically inclined to run faster, as well as programs that use sound. They already have chips in the making going by the code name Klamath. These will be a cross between the current Pentium Pro chips and the Pentium MMX chips. They should be coming out in 1998, and will have a MHz rating of up to 400. Right now the MMX chips are shipping at 200 MHz and will soon reach 233 MHz. Intel is moving very swiftly in bringing us top-of-the-line technology. Apple decided to go with a different CPU architecture. IBM created a RISC (Reduced Instruction Set Computing) CPU that could run faster than a CISC model of the same MHz rating, so a RISC chip with a MHz rating of 100 could run just as fast as a CISC chip with a MHz rating of 133. 
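The claim that a 100 MHz RISC chip can keep up with a 133 MHz CISC chip amounts to saying that performance is clock speed multiplied by the average number of instructions completed per clock tick. A back-of-the-envelope sketch (the instructions-per-clock figures are illustrative assumptions, not measured values for any real chip):

```python
def relative_performance(mhz, instructions_per_clock):
    """Rough throughput: clock rate times average instructions per cycle."""
    return mhz * instructions_per_clock

# If the RISC design completes 1.33 instructions per clock for every
# 1.0 the CISC design manages, 100 MHz RISC matches 133 MHz CISC.
risc = relative_performance(100, 1.33)
cisc = relative_performance(133, 1.00)
print(round(risc), round(cisc))  # 133 133 -> roughly equal throughput
```

The point of the sketch is that megahertz alone never settles a comparison between two different architectures; work done per clock matters just as much.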
Now, given the definitions of CISC and RISC, you would think that the RISC chip has fewer instructions, when actually it is just the opposite; but since it started out with fewer instructions than the CISC chip, it kept that name. IBM did not want to put it into their own personal computers because of compatibility issues. The computer would not be able to use the current hardware or software that was being made for the IBM-compatible computers. So IBM sought out a company that would be willing to buy their RISC chip, and Apple was the company they found. Motorola had previously been designing the chips for Apple, but they were not as fast as IBM's, so Macintosh development slowed down in comparison to IBM. IBM could design RISC chips for Apple with no problem. With this, Apple needed to get developers to make applications specifically for the RISC chip. IBM decided to team up with Motorola because IBM was not equipped to put out chips in the high volume Apple needed. Apple had already been creating a motherboard based on the Motorola chip design, so with IBM and Motorola teaming up they did not have to redesign their motherboards. So now an Apple computer could run faster than an IBM, in a certain sense. A Macintosh Quadra with a 40 MHz Motorola 68040 chip would be faster than most 486DX-66 MHz CPUs. The reason is that the Macintosh computer's parts were totally designed to run with each other. The operating system in the Macintosh would take advantage of the hardware's capabilities, as well as the hardware taking advantage of the operating system. With this interconnected system it would be faster than a system not made to take advantage of every little thing in a piece of hardware. With both companies in heated competition, the pressure was on for them to come out with things that the other did not have. Apple came through very strongly in this area. 
They created many devices that are used in many computers today. In 1984 Apple shipped the first widely successful GUI (Graphical User Interface); this also brought about folders or directories, long file names, drag and drop, and the trash can. All these devices are used in the more popular operating system for the IBM-compatible computer, called Windows 95. Apple also popularized the mouse, which is as common as the keyboard. One thing that helps the IBM-compatible in the hardware area is all the third-party developers. With the Apple computer, only Apple had the rights to develop hardware for their computers. With IBM-compatibles anyone can develop hardware, and thus we have many innovative accessories and hardware for them. One of the more interesting devices for the IBM-compatible computers, featured at the 1997 Comdex show in Vegas, was a speaker system. It looks like a giant plastic dome that is placed above your head pointing down towards you, and allows stereo sound to be heard only by the person directly underneath it. One company showing it in action was Creative Labs, which is a maker of sound cards and usually sets the standard for them. They had many computers networked together and were running a popular game of 1996 called Quake, which is a first-person action game. They had put the dome-shaped speakers above each computer station, and it allowed each player to hear what was going on around them, but it would not make any outside noise or interfere with the person playing right next to them. One of the latest things with computers these days is Plug 'n' Play. It was meant to alleviate the fear of people upgrading their computer themselves, even though some people will always pay someone big money to do it. 
If you are afraid of opening your computer, it is strongly suggested that you have a professional do it, for they have been doing that sort of thing for years, and they know exactly what they are doing as well as what to do if they encounter any problems that are uncommon to the regular consumer. The deal with Plug 'n' Play is that it allows you to install a new sound card or some other plug-in card and then just turn on your computer, without having to change any jumpers or configure it in any way. The Macintosh computer and the Windows 95 operating system both have this feature built in, as do some of the newer IBM-compatible BIOSes. There have been drawbacks to it, though: for people who prefer to configure cards themselves, the software used to configure the card might not allow the configuration they wish to use. Apple computers come with many things that the IBM-compatibles do not always have. For instance, they come with a 16-bit sound card that has voice recognition built into it. With the voice recognition, the operating system was designed to use it in every way you could think of; you could do anything without typing or clicking on a thing. For instance, you could tell it to "Shut Down" and it will go through and turn off the computer, or you could write a letter to a long-lost relative just by speaking. The Macintosh computer was designed so that everything you did was made as easy as possible, and that is why all the software has to be redone when they add new hardware. If you wanted to eject a disk you stuck into it, you went up into the pull-down menus and told it to "eject disk." You could also shut off the computer from the pull-down menus. This is basically the total opposite of the IBM-compatible computers. To eject the disk you just press the little button on the disk drive, and if you want to turn off the computer you just press the power button. 
The Macintosh computer could run into problems: say you had a disk in there and somehow the computer locked up or the power went off; you would not be able to get that disk out. Some of the other things that the latest Macintosh computers have been coming with are networking cards, already built in. If you wanted to play a game or transfer files with a friend, you just grabbed a cord, plugged the two computers together, and then you were off. You could also do video conferencing and send email over the network as well. With the way the Macintosh computer was designed you cannot upgrade the sound card, for everything is built into the system, but with an IBM-compatible computer you could easily take out one card and put in another. Anything that you add on to the Macintosh has to be put on the outside, like CD-ROMs and modems. Also, because the operating system of the Macintosh relies on the computer's hardware and was designed for that particular hardware, if you ever upgrade it you have to upgrade the operating system as well as many hardware components and software that were made for that particular model. That is one reason many of the big-time business users would not want to buy a Macintosh: they would want their investment to last a while, and if they needed to upgrade their systems they would want to do so as cheaply as possible, and the IBM-compatible made it cheap for them to do so. The Macintosh computer itself usually costs about two times as much as a comparable IBM computer. They also tend to confuse their customers by bringing out many new models all the time. For instance, in 1993 alone, Apple introduced 17 different models of the Macintosh computer. Software for the Apple computers is harder to come by than for the IBM-compatible computer. Apple controls all the software for their computers and will not license it to any other developer. So you do not have the variety you do with the IBM computers. 
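The configuration chore that Plug 'n' Play, described earlier, automates can be shown as a toy resource allocator (the IRQ numbers are illustrative only; real Plug 'n' Play negotiation is far more involved): instead of the user setting jumpers, the system hands each new card a free interrupt line.

```python
# Toy Plug 'n' Play: assign each newly installed card a free IRQ
# automatically, instead of making the user set jumpers by hand.

FREE_IRQS = [5, 7, 9, 10, 11]  # illustrative pool of available interrupt lines

def install_cards(cards, free_irqs):
    """Give each card the first free IRQ; return the resulting assignments."""
    pool = list(free_irqs)
    assignments = {}
    for card in cards:
        if not pool:
            raise RuntimeError("no free IRQ left for " + card)
        assignments[card] = pool.pop(0)
    return assignments

print(install_cards(["sound card", "network card"], FREE_IRQS))
# {'sound card': 5, 'network card': 7}
```

The drawback the essay mentions falls out of the same sketch: once the system owns the assignments, a user who wanted the sound card on a specific line has no jumper to set.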
A big thing that has become very popular in the last few years is something called the Internet. Almost everyone has experienced the internet in some form or another. You can do almost anything you want over the internet, from writing a message to some distant relative and having it arrive in minutes, to playing a chess game with someone from Russia. You can also get almost any program you are looking for over the internet, and many of these programs are only for the IBM-compatible computer, for there are more people with an IBM computer and thus more people making applications and games for it. So basically there is just a ton of software out there for people who own an IBM-compatible computer. With the IBM-compatible computer you can continue to upgrade it; even someone who bought a computer five years ago could have upgraded it so that it is just as fast as any computer of today, but with the Macintosh you basically would have to buy a new system. Also, since IBM had used a third party for its operating system, other companies could license the operating system to make their own compatible operating systems, as well as any other software for it. Compatible hardware could easily be assembled, as well as peripherals and components that improve the IBM-compatible computer, from common components like CD-ROMs, modems, sound cards, and printers. You even have a choice of about 20 different styles of mice that you could use on your system, from three basic groups: rollers, trackballs, and touch pads. They have some other ones, like one that clips onto your monitor and shoots infrared beams across the screen to detect movements by your finger, so it basically turns your monitor into a touch screen, as well as handheld ones that move the cursor based on the position of your hand. The Apple computer has usually appealed to the school systems. 
The IBM-compatible computers, meanwhile, have gone more towards businesses and personal use. The main reasons behind this are that the Apple had many types of software directed towards children and helping them learn. They were also easier to use, which appealed to the school systems, for even five-year-old children would be able to use a computer with no problem. The IBM computer went more with businesses because of its ability to be upgraded, so they would be able to get longer use out of it. They could more easily adapt an IBM-compatible computer to their way of doing things, because of the many different pieces of software out there as well as the ease of adding to or upgrading its capabilities. The IBM-compatible computers have been becoming increasingly popular with the school systems, because of Apple going downhill and having less and less software available for it. The IBM and Macintosh computers have been in competition with each other for years, and each of them has its strong points. Apple dominated the personal computer market when it first started, but when the IBM clone was created, that started its downfall. Some of Apple's earlier decisions caused it to lose in the battle with IBM as well. Motorola as its chip manufacturer caused them to be one step behind the Intel-based IBM-compatibles. Not licensing its system so that third parties could create software for it was also a downfall. Now that the IBM-compatible computer has such strong support, it is very unlikely that Apple will be able to bring back a large user group for its personal computer, even though their computers are faster. f:\12000 essays\technology & computers (295)\Making Utilities for MSDOS.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Michael Sokolov English 4 Mr. 
Siedlecki February 1, 1996 Making Utilities for MS-DOS These days, when computers play an important role in virtually all aspects of our life, the issue of concern to many programmers is Microsoft's hiding of technical documentation. Microsoft is by far the most important system software developer. There can be no argument about that. Microsoft's MS-DOS operating system has become a de facto standard (IBM's PC-DOS is actually a licensed version of MS-DOS). And this should be so, because these systems are very well written. The people who designed them are perhaps the best software engineers in the world. But making a computer platform that is a de facto standard should imply a good deal of responsibility toward the developers who make applications for that platform. In particular, proper documentation is essential for such a platform. 
And, sure enough, Microsoft uses this functionality extensively when developing operating system extensions. For example, Microsoft Windows, Microsoft Network, and Microsoft CD-ROM Extensions (MSCDEX) rely heavily on the undocumented internals of MS-DOS. The reader may ask, "Why would they leave functionality undocumented?" To answer that question, we should look at what this "functionality" actually is. In MS-DOS, the undocumented "functionality" consists of the internal structures that MS-DOS uses to implement its documented INT 21h API. Any operating system must have some internal structures in which it keeps information about disk drives, open files, network connections, alien file systems, running tasks, and so on. MS-DOS (from here on, simply DOS) has internal structures too, and these structures form the core of the undocumented "functionality" in MS-DOS.
This operating system also has some undocumented INT 21h API functions, but they serve merely to access the internal structures. These internal structures are extremely version-dependent: each new major MS-DOS version up to 4.00 introduced a significant change to them, so applications using them will always be unportable and suffer compatibility problems. Every computer science textbook will teach you not to meddle with operating system internals. That is exactly why these internal structures are undocumented. This brings up another question: "Why does Microsoft rely on these structures in its own applications?" To answer it, we should take a look at an important class of software products called utilities. Utilities are programs that don't serve end users directly, but extend an operating system to help applications serve end users. To put it another way, utilities are helper programs. Perhaps the best way to learn when you have to meddle with DOS internals is to spend some time developing a utility for MS-DOS. A good example is SteelBox, a utility for on-the-fly data encryption. This development project has made me think about the use of DOS internals in the first place, and it has inspired me to write this paper. Utilities like SteelBox, Stacker, DoubleSpace, newer versions of SmartDrive, and so on need to do the following trick: register with DOS as device drivers, get request packets from it, handle them in a certain way, and sometimes forward them to the driver for another DOS logical drive. The first three steps are rather straightforward and do not involve any "illicit" meddling with MS-DOS internals. The problems begin in the last step. MS-DOS doesn't provide any documented, "legal" way to find and call the driver for a logical drive. However, MS-DOS does have internal structures, called Disk Parameter Blocks (DPBs), which contain all the information about all logical drives, including the pointers to their respective drivers.
If you think about it, it becomes obvious that MS-DOS must have some internal structures like DPBs. Otherwise, how would it be able to service INT 21h API requests? How would it be able to locate the driver for a logical drive it needs to access? Many people have found out about DPBs in some way (possibly through disassembly of DOS code). In the online community there is a very popular place for information obtained through reverse engineering, called The MS-DOS Interrupt List, maintained by Ralf Brown. This list is open to everyone's input, and the people who reverse engineer Microsoft's operating systems often send their discoveries to Ralf Brown, who includes them in his list. The DPB format and the INT 21h call used to get pointers to DPBs are in the Interrupt List as well. As a result, many programmers, including me, have used this information in their utilities without much thought. However, this is not a good thing to do. DPBs have existed since the first release of MS-DOS (as IBM PC-DOS version 1.00), but the DPB format has changed three times throughout its history. The first change occurred in MS-DOS version 2.00, when hard disk support, installable device drivers, and UNIX-like nested directories were introduced. The second change occurred in MS-DOS version 3.00, when the array of Current Directory Structures (CDSs), a new internal structure, was introduced to support local area networks and the JOIN/SUBST commands. The third change occurred in MS-DOS version 4.00, when 32-bit sector addressing was introduced and an oversight in storing the number of sectors in a File Allocation Table (FAT) was fixed. The reader can see that each new major MS-DOS version up to 4.00 introduced a change in the DPB format, and this is typical of all MS-DOS undocumented internal structures. Although one can probably ignore DOS versions earlier than 3.10, that still leaves two different DPB formats to deal with.
And prior to DOS version 5.00, where DPBs were finally documented, no one could be sure that a new DOS version wouldn't change the DPB format once again. In the first version of SteelBox, my utility that needs to know about DPBs in order to do its work, I simply compared the DOS version number obtained via INT 21h/AH=30h with 4.00. If the DOS version was earlier than 4.00, I assumed that it had the same DPB format as IBM PC-DOS versions 3.10-3.30. If the DOS version was 4.00 or later, I assumed that it had the same DPB format as IBM PC-DOS version 4.xx. However, there are problems with such assumptions. First, there are versions of MS-DOS other than IBM PC-DOS, and some of them have internal structures different from those of standard MS-DOS and PC-DOS. For example, European MS-DOS 4.00 returns the same version number as IBM PC-DOS version 4.00, but its internal structures much more closely resemble those of PC-DOS version 3.xx. Second, prior to Microsoft's documenting of DPBs in MS-DOS version 5.00, there was no guarantee that the DPB format wouldn't change with a new DOS version. When I was developing a new version of SteelBox, I started to think about how to use DPBs properly and in a version-independent manner. I justified using DOS internals at all by the fact that a lot of Microsoft's own utilities use them extensively. Examples are MS-DOS external commands like SHARE, JOIN, and SUBST, as well as Microsoft Network, Microsoft Windows, Microsoft CD-ROM Extensions (MSCDEX), and so on. Before we go any further, it should be noted that we must not dump on Microsoft unfairly. Originally I thought that DOS internals were absolutely safe to use and that Microsoft intentionally leaves them undocumented in order to gain an unfair advantage over its competitors. My reasoning was that Microsoft's own utilities have never stopped working with a new DOS version.
To find the magic of "correct" use of DOS internals, I started disassembling Microsoft's utilities. First I looked at three DOS external commands: SHARE, JOIN, and SUBST. All three programs check for an exact DOS version number match. This means that they can work only with one specific version of MS-DOS, which makes sense, given that these utilities are bundled with MS-DOS and can be considered parts of MS-DOS. One of them, SHARE, unlike the other DOS external commands, accesses the DOS kernel variables by absolute offsets in DOSGROUP, the DOS kernel data segment, in addition to getting pointers to certain DOS internal structures and accessing their fields. SHARE not only checks the MS-DOS version number, but also checks the flag at offset 4 in DOSGROUP. In DOS Internals, Geoff Chappell says that this flag indicates the format (or style) of the DOSGROUP layout (501). If you look at the MS-DOS source code (I'll explain how to do that in a few paragraphs), you'll see that programs like SHARE access the kernel variables in the following way: the kernel modules defining these variables in DOSGROUP are linked in with SHARE's own modules. Since the assembler always works the same way, the DOS kernel variables get the same offsets in SHARE's copy of DOSGROUP as in the DOS kernel's copy. When SHARE needs to access a DOS kernel variable, it loads the real DOSGROUP segment into a segment register, tells the assembler that the segment register points to SHARE's own copy of DOSGROUP, and accesses the variable through that segment register. Although the segment register points to one copy of DOSGROUP and the assembler thinks it points to another, everything works correctly because the two copies have the same format.
The reader can draw the following conclusion from this aside: the MS-DOS designers made the MS-DOS internal structures accessible to other programs only for DOS's own use (since linking DOS modules in with a program is acceptable only for the parts of MS-DOS itself). Having seen that the DOS external commands are not a good example for a program that wants to be compatible with all DOS versions, I turned to Microsoft Network. One of its utilities, REDIR, is very similar to SHARE in its operation. Like SHARE, it accesses the DOS kernel variables by absolute offsets. I thought that, unlike SHARE, REDIR was not tied to a specific DOS version. Unfortunately, I wasn't able to disassemble it, because as a high school student I don't have a copy of Microsoft Network. However, Geoff Chappell says that it has separate versions for different versions of DOS, just like SHARE. So I turned to yet another utility. My next stop was MSCDEX, the utility for accessing the High Sierra and ISO-9660 file systems used by CD-ROMs. Unlike SHARE and REDIR, MSCDEX is not tied to one specific DOS version. I'm using MSCDEX version 2.21 with MS-DOS version 5.00, but the same version of MSCDEX can be used with PC-DOS version 3.30. However, it accesses the DOS kernel variables by absolute offsets in DOSGROUP, just like SHARE and REDIR. Of course, my question was, "How does it do that in a version-independent manner?" When I disassembled it, I saw that it takes the flag at offset 4 in DOSGROUP and uses it to determine the absolute offsets of all the variables it needs. If this flag equals 0, MSCDEX assumes that all the offsets it's interested in are the same as in DOS versions 3.10-3.30. If this flag equals 1, MSCDEX assumes they are the same as in DOS versions 4.00-5.00. For all other values of this flag, MSCDEX refuses to load. Sharp-eyed readers might notice that this check already makes MSCDEX potentially incompatible with future DOS versions.
The comments in the source code for MS-DOS version 3.30 (the DOS\MULT.INC file) refer to MSCDEX; therefore, it already existed at the time of MS-DOS version 3.30. It is very doubtful that anyone, including the author of MSCDEX, could have known at that time what offsets the kernel variables would have in DOS version 4.00. If this is true, an MSCDEX version that predates MS-DOS version 4.00 won't run under DOS versions 4.00 and later. MSCDEX uses the flag at offset 4 in DOSGROUP to determine not only the absolute offsets of the kernel variables, but also the "style" of all the other DOS internals that changed with DOS version 4.00. My first thought was that I could use this flag in my utilities when I need to cope with different "styles" of DOS internals. However, my next discovery really surprised me and gave me a real understanding of what I'm doing when I meddle with DOS internals. MSCDEX version 2.21 refuses to run under DOS versions 6.00 and later. So much for the idea that "Microsoft's own utilities have never stopped working with a new DOS version." In fact, Geoff Chappell refers to this in DOS Internals (501). The last utility I looked at was Microsoft SmartDrive version 4.00, which is bundled with Microsoft Windows version 3.10. This utility also uses the DOS internal structures, including the version-dependent ones. However, unlike MSCDEX, SmartDrive doesn't have a "top" DOS version number. It compares the DOS version number with 4.00 and assumes that DOS is similar to versions 3.10-3.30 if it's lower than 4.00 and to versions 4.00-5.00 if it's 4.00 or higher. In other words, SmartDrive assumes that all future DOS versions will be compatible with MS-DOS version 5.00 at the level of the internal structures. The lack of a clear pattern in the usage of the undocumented DOS internal structures by Microsoft's own utilities made me think seriously about whether safe use of the DOS internals is possible in the first place.
Originally I thought that Microsoft had some internal confidential document that explained how to use the DOS internals safely, and that anyone having that magic document could use the undocumented DOS internals as safely as the normal documented INT 21h API. However, the evidence I have obtained through reverse engineering of Microsoft's utilities puts the existence of that magic document into question. In Undocumented DOS, Andrew Schulman notes that it is possible that on some occasions Microsoft's programmers have found out about the MS-DOS internals not from the source code or some other internal confidential document, but from general PC folklore, just like third-party software developers. For example, the MWAVABSI.DLL file from Microsoft Anti-Virus provides a function called AIO_GetListofLists(). This function calls INT 21h/AH=52h to get the pointer to one extremely important DOS internal structure. In the MS-DOS source code this structure is called SysInitVars; in Ralf Brown's Interrupt List and in general PC folklore, however, it is called the List of Lists. This is an indication that Microsoft's programmers sometimes act just like third-party software developers (Schulman et al., Undocumented DOS, 44). On several occasions I have made references to the MS-DOS source code. However, most programmers know that the MS-DOS source code is unavailable to non-Microsoft employees. Therefore, before we go any further, I need to explain how I could look at the MS-DOS source code. Microsoft gives it to certain companies, mostly Original Equipment Manufacturers (OEMs). Some people can claim that they are OEMs and get the Microsoft documents available only to OEMs (however, this costs a lot of money). And then some people who don't care too much about laws start distributing the confidential information they have. This is especially easy in Russia, where copyright laws are not enforced.
So one way or another, knowledge of some parts of the MS-DOS source code spreads among the people. The MS-DOS OEM Adaptation Kit (OAK) contains commented source code for some MS-DOS modules and include files, and .OBJ files made from some other modules. Let's summarize what we've seen so far. MS-DOS, like any other operating system, has internal structures. Every computer science textbook will teach you not to rely on an operating system's internals. In MS-DOS, the internal structures are undocumented. Microsoft's own utilities do rely on them. By reverse engineering these utilities, looking at the MS-DOS source code, and thinking the problem through, one can come to the conclusion that there is absolutely no safe way of using the MS-DOS internal structures. The only proper way of using them is not to use them at all. But no sooner had I come to this conclusion than my SteelBox development project brought me back to reality. No matter how bad it is to use the MS-DOS internals, utility developers like me have to do it, because they have no other choice. Now I'm almost sure that this is precisely why Microsoft uses the MS-DOS internals itself. Before we go any further, I need to clarify one important detail. Once a programmer asked Microsoft to document the INT 2Fh/AH=11h interface, generally known as the network redirector interface. Microsoft responded: The INT 2fh interface to the network is an undocumented interface. Only INT 2fh, function 1100h (get installed state) of the network services is documented. Some third parties have reverse engineered and documented the interface (i.e., "Undocumented DOS" by Shulman [sic], Addison-Wesley), but Microsoft provides absolutely no support for programming on that API, and we do not guarantee that the API will exist in future versions of MS-DOS. This sounds like Microsoft saying, "Here's where you get the info, but you better not use it." (Schulman et al., Undocumented DOS, 495).
Some people might think that Microsoft has internal confidential documents describing the MS-DOS internals much better than Andrew Schulman's Undocumented DOS, but there are indications that the MS-DOS source code is the only "document" Microsoft has (I'll address this issue in a few paragraphs). Perhaps Microsoft's programmers themselves use the same documentation as third parties. So far we have seen that MS-DOS is not a perfect operating system, and that it gives utility developers no choice but to use its undocumented, version-dependent internals. The reader might ask, "What can we do about it?" First of all, some of the formerly undocumented DOS functionality was documented in DOS version 5.00. The reason was probably that some INT 21h functions used by DOS external commands like PRINT don't actually deal with any DOS internals at all, and Microsoft had simply underestimated the usefulness of these functions originally. Microsoft has even documented the DPBs. However, Microsoft's documentation says that the DPBs are available only in DOS versions 5.00 and later, while the reader should remember that the DPB format had changed several times before that. So in this case, by documenting the DPBs, Microsoft even restricted its own ability to make changes in MS-DOS. However, there are still a lot of undocumented internals in MS-DOS, and it should be noted that documenting them is out of the question: doing so would make it impossible to make significant changes in MS-DOS, thereby stalling its enhancement. In Undocumented DOS, Andrew Schulman suggests that Microsoft could make an add-in to MS-DOS that would provide "clean" documented services and thereby eliminate the need to use DOS internals. Microsoft actually did this once, when it introduced the IFSFUNC utility in MS-DOS version 4.00. This utility converted the "dirty" and extremely version-dependent redirector interface into a device-driver-like interface.
However, this utility was removed from MS-DOS versions 5.00 and later (I'll explain why in a few paragraphs). Fortunately, the ill-fated IFSFUNC utility was not the only effort to enhance MS-DOS. In Microsoft Windows versions 3.00 through 3.11, there is a component called Win386. It got its name from Windows/386, its ancestor. In early beta releases of Microsoft's Chicago operating system this component was called DOS386. When Chicago was renamed Windows 95, this component was given the uninteresting name VMM32. Finally, the beta release of Microsoft C/C++ Compiler version 7.00 included this component from Microsoft Windows under the name MSDPMI. I think the best name for this component is DOS386, so that's what I'll call it. The reader will probably ask, "What is this component?" DOS386 is a multitasking protected-mode operating system. A close inspection of DOS386 reveals that it has almost nothing to do with Windows, and a lot to do with DOS (that's why I prefer the name DOS386 over Win386). Two of DOS386's subcomponents, DOSMGR and IFSMGR, are perhaps the heaviest users of DOS internals. These modules know a lot about the internals of MS-DOS, and they provide their own interfaces which can in fact help a utility avoid using DOS internals. For example, let's return to our SteelBox utility. This utility needs to access a file from inside an INT 21h call. Most DOS programmers know that the DOS INT 21h API is non-reentrant: no INT 21h calls can be made while an INT 21h call is already being serviced. Therefore, a utility like SteelBox would have to play tricks with DOS internals, with all the attendant consequences. On the other hand, DOS386's IFSMGR subcomponent provides an interface that replaces INT 21h. Unfortunately, IFSMGR is documented only in the Windows 95 Device Development Kit (DDK), and I don't have a copy of it yet. However, it is quite possible that the IFSMGR interface is reentrant.
If it is, all the problems with SteelBox would be solved immediately, and it wouldn't contain a single undocumented DOS call. Keep in mind, however, that DOS386 is relatively new, and perhaps its current version doesn't provide all the desired functionality. But DOS386 is certainly a good foundation for a new operating system. Although I definitely don't want to blame Microsoft too much, I have to say some unpleasant truths about this company. In their race for profit, the people at Microsoft violate some principles of free enterprise; in other words, they are trying to build a monopoly. One of the unfair things Microsoft does is called discriminatory documentation. Although the source code for MS-DOS, Microsoft Network, and other Microsoft products is supposedly unavailable to anyone, Microsoft has made the source code of some utilities available to selected vendors (Schulman et al., Undocumented DOS, 495). Another example is the deliberate incompatibility of some Microsoft products with Digital Research's DR-DOS. Some programs, including the Microsoft Windows version 3.10 beta and Microsoft C Compiler version 6.00, contain special code with the sole purpose of making them incompatible with DR-DOS and other DOS workalikes. Although I'm definitely not a supporter of DOS workalikes, I think that Microsoft should use fair methods of competition. Finally, there is a big problem with Microsoft's packaging of MS-DOS and DOS386. The most important problem with DOS386 is that it's currently available to users only as Win386 in Microsoft Windows. Furthermore, the usual Windows technical documentation (except the DDK) doesn't even mention the existence of Win386, because it's actually not a part of Windows. As a result, an amazing number of programmers don't even know about DOS386 (or Win386), and many of those who do greatly underestimate its tremendous importance. Now Windows 95 comes into play. In this package, MS-DOS, DOS386, and Windows are thrown into one melting pot.
First of all, the integration of MS-DOS and DOS386 is a very good step. Given the volatility of DOS internals, the DOSMGR subcomponent of DOS386 (which, remember, is perhaps the heaviest user of DOS internals) certainly should be tied to one specific DOS version. However, the tie between DOS/DOS386 and Windows is largely artificial. Try a simple experiment. Rename the KRNL386.EXE file in your WINDOWS\SYSTEM directory to something else, and put something else (COMMAND.COM fits nicely) into that directory under the name KRNL386.EXE. Then try to run Windows. Instead of running Windows, this will load and activate Win386 without loading Windows. And there is no magic in this simple experiment: KRNL386.EXE is the first module of Windows, and Win386 runs it when it completes its initialization. By putting something else in place of KRNL386.EXE, one can break the artificial tie between Windows and DOS386. At some point Microsoft probably thought of making a version of DOS386 that would not be tied to Windows. There was a utility called MSDPMI in the beta release of Microsoft C/C++ Compiler version 7.00, which was exactly that: DOS386 without Windows. But now Microsoft is abandoning MS-DOS and everything else that is not Windows. Microsoft tries to persuade users that Windows 95 doesn't contain a DOS component, but this is not true; it is simply tied into Windows. Now let's summarize the above. Microsoft is ignoring the minority of users who don't like Windows and who want to use MS-DOS and DOS386 without Windows, because Microsoft cares only about its profit. One person cannot stop them from doing that. Therefore, we, the programmers, should unite. If I call Microsoft alone, no one will listen to me. But if thousands of us do it together, we might achieve something. If you have any questions or suggestions about creating an association of programmers against Microsoft, please send e-mail to Michael Sokolov at gq696@cleveland.freenet.edu. Bibliography Brown, Ralf.
The MS-DOS Interrupt List. Not published on paper; available online for free. Chappell, Geoff. DOS Internals. New York: Addison-Wesley Publishing Company, 1994. Microsoft Corporation. Microsoft Windows Device Development Kit. Computer software. Redmond: Microsoft, 1990. Pietrek, Matt. Windows Internals: The Implementation of the Windows Operating Environment. New York: Addison-Wesley Publishing Company, 1993. Schulman, Andrew, Ralf Brown, David Maxey, Raymond J. Michels, and Jim Kyle. Undocumented DOS: A Programmer's Guide to Reserved MS-DOS Functions and Data Structures. New York: Addison-Wesley Publishing Company, 1994. Schulman, Andrew, David Maxey, and Matt Pietrek. Undocumented Windows: A Programmer's Guide to Reserved Microsoft Windows API Functions. New York: Addison-Wesley Publishing Company, 1992. f:\12000 essays\technology & computers (295)\Meet Mr Computer.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Have you ever seen a computer in a store and said, "Whoa! What a chick!"? I am sure you would have, if you were familiar with the new 16x CD-ROM and extra-wide SCSI-2 9.0 GB hard drive it features, or if you knew about the dual 225 MHz Pentium Pro MMX chips blazing up its performance. To tell you all about computers, it takes a total computer nut like me. After working with computers almost all my life, I can tell you that a computer is an electrical device without which a guy like me probably cannot survive. If you have no idea what I am beeping about, read on. Experts, I report no error in reading further. Computers are very productive tools in our everyday lives. To maximize the utility of a computer, what you need to do is get going with the program. To do that, the minimum system requirements are a C.P.U., or central processing unit, a keyboard, a monitor, a mouse, and, if you want, a printer and a CD-ROM drive. The C.P.U.
is that part of a computer that faithfully does what his master tells him to do, with the help of input devices like a keyboard or a mouse. After all this so-called sophisticated, next-generation equipment, you need some sort of software. Software is a set of instructions to the C.P.U. from a source such as a floppy disk, a hard drive, or a CD-ROM drive, in zillions of 1's and 0's. Each of these tiny little digits is a bit, and bits assemble into bytes. Bytes make up a program, which you run to use the computer's various applications. Now that you know more about computers than Einstein did, let me tell you something more about them, so that you will beat the President in the field of computing. In your computer, you require a good amount of RAM, which is there to randomly access memory. That is required to speed up your computer, so that it gives you more error messages in less time. The faster it gives you error messages, the faster you call technical help at 1-800-NeedHelp. The service is open 24 hours a day, but to get through, you will have to wait, at least, until the next Halley's comet passes by. The only thing now required, for you to become the master of this part of the world, is a very BOLD determination to become a computer geek. Since you have learnt everything about the basics, I would like to transfer command to the owner's manual that came with your computer, to help you master the specific applications. While learning about the basic fifth generation of PCs, let's not forget the choice of the new generation: network computing on the Internet and the world wide web. The Internet is probably the most important development in the history of human beings, since the evolution of the Macintosh. The Internet can do all the projects and presentations your teachers demand of you. It can also buy you some pizzas from Pizza Hut and help you book a ticket for your flight to Ithaca.
But as every benefit comes with a big loophole, in this case the problem is that once you dial up your Internet service provider, you are welcomed by a busy signal! So boy, are you glad when, after half an hour or so, you finally meet with success getting on-line. After you go on-line, you open the Netscape Navigator browser to go find what you want. You go to a search engine, and then another search engine, and then yet another search engine, and then you finally find out that what you want is just what you don't get in this terrible world of advertisement. So you quit and go join a chat group, talking with the weirdest people you can think of, thinking of the fun you are having in this beautiful world, without knowing who it is that you are talking to, and forgetting the fact that the $$$ meter is rising and climbing and mounting every hour you are on-line. Finally, you know that the typical use of computers is not only for typing and calculating, but also for learning the masterful art of patience and how to cope with the mistakes others make without cursing them. Life is simply not possible without this abnormally useful machine in these good old 90's. Since all that starts well ends well, to end this reading you might want to close this page with your thumb and your forefinger, or else you might get an error message, and then you will have to read this all over again. f:\12000 essays\technology & computers (295)\Microarchitecture of the Pentium Pro Processor.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ A Tour of the Pentium(r) Pro Processor Microarchitecture Introduction One of the Pentium(r) Pro processor's primary goals was to significantly exceed the performance of the 100MHz Pentium(r) processor while being manufactured on the same semiconductor process.
Using the same process as a volume production processor practically assured that the Pentium Pro processor would be manufacturable, but it meant that Intel had to focus on an improved microarchitecture for ALL of the performance gains. This guided tour describes how multiple architectural techniques - some proven in mainframe computers, some proposed in academia, and some we innovated ourselves - were carefully interwoven, modified, enhanced, tuned and implemented to produce the Pentium Pro microprocessor. This unique combination of architectural features, which Intel describes as Dynamic Execution, enabled the first Pentium Pro processor silicon to exceed the original performance goal. Building from an already high platform The Pentium processor set an impressive performance standard with its pipelined, superscalar microarchitecture. The Pentium processor's pipelined implementation uses five stages to extract high throughput from the silicon - the Pentium Pro processor moves to a decoupled, 12-stage, superpipelined implementation, trading less work per pipestage for more stages. The Pentium Pro processor reduced its pipestage time by 33 percent, compared with a Pentium processor, which means the Pentium Pro processor can have roughly a 50% higher clock speed than a Pentium processor (a pipestage that takes a third less time can be clocked about one and a half times as fast) and still be equally easy to produce from a semiconductor manufacturing process (i.e., transistor speed) perspective. 
It requires the instruction "fetch/decode" phase of the Pentium Pro processor to be much more intelligent in terms of predicting program flow. Optimized scheduling requires the fundamental "execute" phase to be replaced by decoupled "dispatch/execute" and "retire" phases. This allows instructions to be started in any order but always be completed in the original program order. The Pentium Pro processor is implemented as three independent engines coupled with an instruction pool as shown in Figure 1 below. What is the fundamental problem to solve? Before starting our tour of how the Pentium Pro processor achieves its high performance, it is important to note why this three-independent-engine approach was taken. A fundamental fact of today's microprocessor implementations must be appreciated: most CPU cores are not fully utilized. Consider the code fragment in Figure 2 below: The first instruction in this example is a load of r1 that, at run time, causes a cache miss. A traditional CPU core must wait for its bus interface unit to read this data from main memory and return it before moving on to instruction 2. This CPU stalls while waiting for this data and is thus under-utilized. While CPU speeds have increased 10-fold over the past 10 years, the speed of main memory devices has only increased by 60 percent. This increasing memory latency, relative to the CPU core speed, is a fundamental problem that the Pentium Pro processor set out to solve. One approach would be to place the burden of this problem onto the chipset, but a high-performance CPU that needs very high-speed, specialized support components is not a good solution for a volume production system. A brute-force approach to this problem is, of course, increasing the size of the L2 cache to reduce the miss ratio. While effective, this is another expensive solution, especially considering the speed requirements of today's L2 cache SRAM components. 
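The stall in the Figure 2 fragment, and the benefit of issuing instructions by operand readiness instead of program order, can be seen in a toy simulation. This is a hypothetical sketch with invented instruction names and latencies, not Intel's hardware:

```python
# Toy readiness-driven scheduler: instructions issue as soon as their
# source registers are ready, regardless of program order. A hypothetical
# sketch only; the latencies and registers are invented for illustration.
import math

def issue_order(instructions):
    """instructions: list of (name, sources, dest, latency) tuples.
    Returns the order in which instructions begin executing."""
    # A register written by a not-yet-issued instruction is not ready yet.
    ready_at = {dest: math.inf for _, _, dest, _ in instructions}
    order, pending, cycle = [], list(instructions), 0
    while pending:
        for ins in list(pending):
            name, sources, dest, latency = ins
            if all(ready_at.get(r, 0) <= cycle for r in sources):
                ready_at[dest] = cycle + latency  # result available later
                order.append(name)
                pending.remove(ins)
        cycle += 1
    return order

# Figure 2's situation: instruction 1 misses the cache (long latency),
# instruction 2 depends on its result, instructions 3 and 4 do not.
program = [
    ("i1", [],     "r1", 30),   # load r1: cache miss, many cycles
    ("i2", ["r1"], "r2", 1),    # uses r1, must wait for the miss
    ("i3", [],     "r3", 1),    # independent
    ("i4", [],     "r4", 1),    # independent
]
print(issue_order(program))   # ['i1', 'i3', 'i4', 'i2']
```

An in-order core would sit idle behind i1; here i3 and i4 proceed during the miss, which is exactly the under-utilization the three-engine design attacks.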
Instead, the Pentium Pro processor is designed from an overall system implementation perspective, which allows higher-performance systems to be designed with cheaper memory subsystem designs. Pentium Pro processor takes an innovative approach To avoid this memory latency problem the Pentium Pro processor "looks ahead" into its instruction pool at subsequent instructions and will do useful work rather than be stalled. In the example in Figure 2, instruction 2 is not executable since it depends upon the result of instruction 1; however, both instructions 3 and 4 are executable. The Pentium Pro processor speculatively executes instructions 3 and 4. We cannot commit the results of this speculative execution to permanent machine state (i.e., the programmer-visible registers) since we must maintain the original program order, so the results are instead stored back in the instruction pool awaiting in-order retirement. The core executes instructions depending upon their readiness to execute and not on their original program order (it is a true dataflow engine). This approach has the side effect that instructions are typically executed out-of-order. The cache miss on instruction 1 will take many internal clocks, so the Pentium Pro processor core continues to look ahead for other instructions that could be speculatively executed and is typically looking 20 to 30 instructions in front of the program counter. Within this 20- to 30-instruction window there will be, on average, five branches that the fetch/decode unit must correctly predict if the dispatch/execute unit is to do useful work. The sparse register set of an Intel Architecture (IA) processor will create many false dependencies on registers, so the dispatch/execute unit will rename the IA registers to enable additional forward progress. 
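The renaming idea just mentioned can be sketched as follows. This is a hypothetical simplification with invented register names and a naive always-allocate policy; the real RAT and ROB are far more elaborate:

```python
# Hypothetical sketch of register renaming: each write to a logical IA
# register is given a fresh physical register, removing the false
# dependencies that arise when uops reuse the same architectural name.

def rename(uops, num_logical=8):
    """uops: list of (sources, dest) using logical names 'r0'..'r7'.
    Returns the same uops rewritten with physical names 'pN'."""
    table = {f"r{i}": f"p{i}" for i in range(num_logical)}  # initial map
    next_phys = num_logical
    renamed = []
    for sources, dest in uops:
        phys_sources = [table[s] for s in sources]  # read current mapping
        table[dest] = f"p{next_phys}"               # fresh register for the write
        next_phys += 1
        renamed.append((phys_sources, table[dest]))
    return renamed

# Two uops both writing r1: after renaming they target different physical
# registers, so the second need not wait for the first to retire.
print(rename([(["r2"], "r1"), (["r3"], "r1")]))
# [(['p2'], 'p8'), (['p3'], 'p9')]
```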
The retire unit owns the physical IA register set, and results are only committed to permanent machine state when it removes completed instructions from the pool in original program order. Dynamic Execution technology can be summarized as optimally adjusting instruction execution by predicting program flow, analysing the program's dataflow graph to choose the best order to execute the instructions, then having the ability to speculatively execute instructions in the preferred order. The Pentium Pro processor dynamically adjusts its work, as defined by the incoming instruction stream, to minimize overall execution time. Overview of the stops on the tour We have previewed how the Pentium Pro processor takes an innovative approach to overcome a key system constraint. Now let's take a closer look inside the Pentium Pro processor to understand how it implements Dynamic Execution. Figure 3 below extends the basic block diagram to include the cache and memory interfaces - these will also be stops on our tour. We shall travel down the Pentium Pro processor pipeline to understand the role of each unit: •The FETCH/DECODE unit: An in-order unit that takes as input the user program instruction stream from the instruction cache, and decodes it into a series of micro-operations (uops) that represent the dataflow of that instruction stream. The program pre-fetch is itself speculative. •The DISPATCH/EXECUTE unit: An out-of-order unit that accepts the dataflow stream, schedules execution of the uops subject to data dependencies and resource availability, and temporarily stores the results of these speculative executions. •The RETIRE unit: An in-order unit that knows how and when to commit ("retire") the temporary, speculative results to permanent architectural state. •The BUS INTERFACE unit: A partially ordered unit responsible for connecting the three internal units to the real world. 
The bus interface unit communicates directly with the L2 cache, supporting up to four concurrent cache accesses. The bus interface unit also controls a transaction bus, with MESI snooping protocol, to system memory. Tour stop #1: The FETCH/DECODE unit. Figure 4 shows a more detailed view of the fetch/decode unit: Let's start the tour at the Instruction Cache (ICache), a nearby place for instructions to reside so that they can be looked up quickly when the CPU needs them. The Next_IP unit provides the ICache index, based on inputs from the Branch Target Buffer (BTB), trap/interrupt status, and branch-misprediction indications from the integer execution section. The 512-entry BTB uses an extension of Yeh's algorithm to provide greater than 90 percent prediction accuracy. For now, let's assume that nothing exceptional is happening, and that the BTB is correct in its predictions. (The Pentium Pro processor integrates features that allow for the rapid recovery from a misprediction, but more of that later.) The ICache fetches the cache line corresponding to the index from the Next_IP, and the next line, and presents 16 aligned bytes to the decoder. Two lines are read because the IA instruction stream is byte-aligned, and code often branches to the middle or end of a cache line. This part of the pipeline takes three clocks, including the time to rotate the prefetched bytes so that they are justified for the instruction decoders (ID). The beginning and end of the IA instructions are marked. Three parallel decoders accept this stream of marked bytes, and proceed to find and decode the IA instructions contained therein. The decoder converts the IA instructions into triadic uops (two logical sources, one logical destination per uop). 
Most IA instructions are converted directly into single uops; some are decoded into one to four uops, and the complex instructions require microcode (the box labeled MIS in Figure 4; this microcode is just a set of preprogrammed sequences of normal uops). Some instruction bytes, called prefix bytes, modify the instruction that follows, giving the decoder a lot of work to do. The uops are enqueued, and sent to the Register Alias Table (RAT) unit, where the logical IA-based register references are converted into Pentium Pro processor physical register references, and to the Allocator stage, which adds status information to the uops and enters them into the instruction pool. The instruction pool is implemented as an array of Content Addressable Memory called the ReOrder Buffer (ROB). We have now reached the end of the in-order pipe. Tour stop #2: The DISPATCH/EXECUTE unit The dispatch unit selects uops from the instruction pool depending upon their status. If the status indicates that a uop has all of its operands, then the dispatch unit checks to see if the execution resource needed by that uop is also available. If both are true, it removes that uop and sends it to the resource where it is executed. The results of the uop are later returned to the pool. There are five ports on the Reservation Station and the multiple resources are accessed as shown in Figure 5 below: The Pentium Pro processor can schedule at a peak rate of 5 uops per clock, one to each resource port, but a sustained rate of 3 uops per clock is typical. The activity of this scheduling process is the quintessential out-of-order process; uops are dispatched to the execution resources strictly according to dataflow constraints and resource availability, without regard to the original ordering of the program. Note that the actual algorithm employed by this execution-scheduling process is vitally important to performance. 
If only one uop per resource becomes data-ready per clock cycle, then there is no choice. But if several are available, which should it choose? It could choose randomly, or first-come-first-served. Ideally it would choose whichever uop would shorten the overall dataflow graph of the program being run. Since there is no way to really know that at run-time, it approximates by using a pseudo FIFO scheduling algorithm favoring back-to-back uops. Note that many of the uops are branches, because many IA instructions are branches. The Branch Target Buffer will correctly predict most of these branches but it can't correctly predict them all. Consider a BTB that's correctly predicting the backward branch at the bottom of a loop: eventually that loop is going to terminate, and when it does, that branch will be mispredicted. Branch uops are tagged (in the in-order pipeline) with their fallthrough address and the destination that was predicted for them. When the branch executes, what the branch actually did is compared against what the prediction hardware said it would do. If those coincide, then the branch eventually retires, and most of the speculatively executed work behind it in the instruction pool is good. But if they do not coincide (a branch was predicted as taken but fell through, or was predicted as not taken and it actually did take the branch) then the Jump Execution Unit (JEU) changes the status of all of the uops behind the branch to remove them from the instruction pool. In that case the proper branch destination is provided to the BTB which restarts the whole pipeline from the new target address. Tour stop #3: The RETIRE unit Figure 6 shows a more detailed view of the retire unit: The retire unit is also checking the status of uops in the instruction pool - it is looking for uops that have executed and can be removed from the pool. Once removed, the uops' original architectural target is written as per the original IA instruction. 
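The branch check-and-flush sequence described above can be sketched as follows. This is a hypothetical simplification (the field names and addresses are invented, and the real machine tracks much more state per uop):

```python
# Hypothetical sketch of misprediction recovery: the branch's actual
# outcome is compared with its prediction; on a mismatch, every uop
# younger than the branch is squashed from the instruction pool and
# fetch restarts at the correct target.

def resolve_branch(pool, branch):
    """pool: list of uops as dicts with 'seq' (program order) and 'valid'.
    branch: dict with 'seq', 'predicted_target', 'actual_target'.
    Returns the restart address, or None if the prediction was correct."""
    if branch["actual_target"] == branch["predicted_target"]:
        return None  # prediction was right; speculative work survives
    for uop in pool:  # squash everything younger than the branch
        if uop["seq"] > branch["seq"]:
            uop["valid"] = False
    return branch["actual_target"]  # fetch/decode restarts here

pool = [{"seq": n, "valid": True} for n in range(10, 16)]
branch = {"seq": 12, "predicted_target": 0x1000, "actual_target": 0x2000}
restart = resolve_branch(pool, branch)
print(hex(restart))                            # 0x2000
print([u["seq"] for u in pool if u["valid"]])  # [10, 11, 12]
```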
The retirement unit must not only notice which uops are complete, but must also re-impose the original program order on them. It must also do this in the face of interrupts, traps, faults, breakpoints, and mispredictions. There are two clock cycles devoted to the retirement process. The retirement unit must first read the instruction pool to find the potential candidates for retirement and determine which of these candidates are next in the original program order. Then it writes the results of this cycle's retirements to both the Instruction Pool and the RRF (the Retirement Register File). The retirement unit is capable of retiring 3 uops per clock. Tour stop #4: BUS INTERFACE unit Figure 7 shows a more detailed view of the bus interface unit: There are two types of memory access: loads and stores. Loads only need to specify the memory address to be accessed, the width of the data being retrieved, and the destination register. Loads are encoded into a single uop. Stores need to provide a memory address, a data width, and the data to be written. Stores therefore require two uops, one to generate the address, one to generate the data. These uops are scheduled independently to maximize their concurrency, but must re-combine in the store buffer for the store to complete. Stores are never performed speculatively, there being no transparent way to undo them. Stores are also never reordered among themselves. The Store Buffer dispatches a store only when the store has both its address and its data, and there are no older stores awaiting dispatch. What impact will a speculative core have on the real world? Early in the Pentium Pro processor project, we studied the importance of memory access reordering. The basic conclusions were as follows: •Stores must be constrained from passing other stores, for only a small impact on performance. •Stores can be constrained from passing loads, for an inconsequential performance loss. 
•Constraining loads from passing other loads or from passing stores creates a significant impact on performance. So what we need is a memory subsystem architecture that allows loads to pass stores. And we need to make it possible for loads to pass loads. The Memory Order Buffer (MOB) accomplishes this task by acting like a reservation station and Re-Order Buffer, in that it holds suspended loads and stores, redispatching them when the blocking condition (dependency or resource) disappears. Tour Summary It is the unique combination of improved branch prediction (to offer the core many instructions), data flow analysis (choosing the best order), and speculative execution (executing instructions in the preferred order) that enables the Pentium Pro processor to deliver its performance boost over the Pentium processor. This unique combination is called Dynamic Execution, and its impact is similar to that of "Superscalar" on previous-generation Intel Architecture processors. While all your PC applications run on the Pentium Pro processor, today's powerful 32-bit applications take best advantage of Pentium Pro processor performance. And while our architects were honing the Pentium Pro processor microarchitecture, our silicon technologists were working on an advanced manufacturing process - the 0.35 micron process. The result is that the initial Pentium Pro Processor CPU core speeds range up to 200MHz. f:\12000 essays\technology & computers (295)\Microprocessors.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Inside the mysterious box that perches ominously on your desk is one of the marvels of the modern world. This marvel is also a total enigma to most of the population. This enigma is, of course, the microprocessor. To an average observer, a microprocessor is simply a small piece of black plastic that is found inside of almost everything. 
In How Microprocessors Work they are defined as a computer's central processing unit, usually contained on a single integrated circuit (Wyant and Hammerstrom, 193). In plain English this simply means that a microprocessor is the brain of a computer and it is only on one chip. Winn L. Rosch compares them to an electronic equivalent of a knee joint that, when struck with the proper digital stimulus, will react in exactly the same way each time (Rosch, 37). More practically, a microprocessor is a multitude of transistors squeezed onto as small a piece of silicon as possible to do math problems as fast as possible. Microprocessors are made of many smaller components which all work together to make the chip work. A really good analogy for the way the inner workings of a chip operate can be found in How Microprocessors Work. In their book, Wyant and Hammerstrom describe a microprocessor as a factory and all of the inner workings of the chip as the various parts of a factory (Wyant and Hammerstrom, 71-103). Basically a microprocessor can be seen as a factory because, like a factory, it is sent something and is told what to do with it. The microprocessor factory processes information. The most basic unit of this information is the bit. A bit is simply on or off. It is either a one or a zero. Bits are put into 8-bit groups called bytes. The number 8 is used because it offers enough combinations to encode our entire language (2^8=256). If only 4 bits were used, only 16 combinations (2^4=16) would be possible. This is enough to encode the digits and some operations. (The first microprocessors powered calculators.) A half byte is called a nibble and consists of 4 bits. In the world of computer graphics the combination of bits is more easily seen. In computer graphics bits are used to make color combinations, thus with more bits more colors are possible. Eight-bit graphics will display 256 colors, 16-bit will display 65,536, and 24-bit graphics will display 16.7 million colors. 
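The bit-combination figures above are just powers of two:

```python
# The number of distinct values n bits can encode is 2^n, which matches
# the character-set and color-depth figures quoted above.

def combinations(bits):
    return 2 ** bits

print(combinations(4))           # 16 -- a nibble: digits plus a few operations
print(combinations(8))           # 256 -- a byte: enough for a character set
print(combinations(16))          # 65536 colors at 16-bit depth
print(f"{combinations(24):,}")   # 16,777,216 -- the "16.7 million" colors
```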
The bus unit is described as the shipping dock because it controls data transfers between the individual pieces of the chip. The part of the chip that performs the role of a purchasing department is called the prefetch unit. Its job is to make certain that enough data is on hand to keep the chip busy. The decode unit performs the role of a receiving department. It breaks down complicated instructions from the rest of the computer into smaller pieces that the chip can manipulate more readily. The control unit is compared to the person who oversees the workings of the entire factory. It is the part of the chip that keeps all the other parts working together and coordinates their actions. The arithmetic logic unit is compared to the assembly line of the factory. It is the part of the microprocessor that performs the math operations. It consists of circuitry that performs the math and the registers which hold the necessary information. The memory management unit is likened to the shipping department of this digital factory. It is responsible for sending data to the bus unit. Together all of the individual pieces support each other to make this digital symbiosis work as fast as possible. To an outsider, computer nerd vernacular and all other forms of computer esoterica may seem frightening. Probably the most misunderstood term in microprocessor performance is megahertz (MHz), or millions of clock cycles per second. This is a measurement of chip speed but is better considered the RPM of the chip (Knorr, 135). For example, a 100 MHz 486 processor cannot touch the speed of a Pentium running at only 60 MHz. This is because the Pentium packs more power and can do more per clock cycle. The computer bus is the data line that connects the microprocessor to the rest of the computer. The width of the bus (how many bits it consists of) controls how much data can be sent to the chip per clock cycle. 
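The 486-versus-Pentium comparison above comes down to simple arithmetic: effective speed is clock rate times work done per clock. The instructions-per-clock figures below are illustrative assumptions, not measured values:

```python
# Throughput = clock rate (MHz) x instructions per clock (IPC).
# The IPC numbers here are invented for illustration only; they are not
# benchmark results for real 486 or Pentium parts.

def mips(mhz, ipc):
    # millions of instructions per second
    return mhz * ipc

i486 = mips(100, 0.8)    # assumed: a bit under one instruction per clock
pentium = mips(60, 1.6)  # assumed: superscalar, well over one per clock
print(i486 < pentium)    # True -- the slower clock wins on throughput
```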
MIPS, or millions of instructions per second, is simply how many instructions the chip can perform in one second, divided by 1,000,000. RISC is also a commonly used term in the computing world. It is an acronym for Reduced Instruction Set Computer. Chips that incorporate RISC technology basically rely on simplicity to enhance performance. Motorola chips use this technology. The opposite of RISC is CISC, which stands for Complex Instruction Set Computer. These chips use more hardwired instructions to speed up processing. All Intel PC products fall into this category. Pipelining, superscalar architecture, and branch prediction logic are current technological buzzwords in the computer community. These technologies can be found in newer chips. Pipelining allows the chip to seek out new data while the old data is still being worked on (Wyant and Hammerstrom, 161). Superscalar architecture allows complex instructions to be broken down into smaller ones and then processed simultaneously through separate pipelines (Wyant and Hammerstrom, 161-163). Branch prediction logic uses information about the way a program has behaved in the past to try to predict what the program will do next (Wyant and Hammerstrom, 165). Bus speed is simply the speed in MHz at which the data bus travels. This determines how fast the microprocessor can communicate with the rest of the computer. A register is the part of the chip that holds the information that the chip is currently manipulating. The width of the register in bits determines how much data the chip can process simultaneously. Using very long instruction words simply means using instructions larger than 16 bits to increase the amount of data the chip can be sent at once. A new tool for making chips run faster is to place a cache on the chip. This cache holds data that the chip is most likely to need first. Since the data is stored inside the chip, the access time is lowered dramatically. 
In the future more and more of the computer will be integrated on the main processing unit. Line width is also a sign of the technological times. It is simply the size of the smallest feature on a chip. Basically, the smaller the lines, the more transistors can be squeezed onto the wafer, increasing performance while cutting manufacturing costs. Non-technological issues also have a major effect on the microprocessing world. One such issue is heat. This may sound trivial, but a Pentium chip can and will burn the skin of a person who touches one that has been running for longer than a few minutes. Without a fan most modern chips will melt and/or destroy themselves. To combat this, large aluminum heat sinks are attached to the chips and a large fan is placed in the case. Some users prefer to use a separate fan above the heat sink for added insurance. Operating voltages can also add to the heat problem. Chips run from either 3.3 or 5 volts. Three and three tenths volts is preferred now because with less power less heat is generated, and in the case of laptops battery life is extended. Credit for the invention of the microprocessor is given to Intel. This first microprocessor was the 4004 and was released in 1971. This single chip matched the performance of the room-sized computer ENIAC from the forties (Wyant and Hammerstrom, 19). This chip could only support a four-bit bus. These four bits only offered the possibility of coding 16 symbols (2^4=16). Sixteen symbols was enough for the digits and a few operators. This limited the 4004 to calculator usage. The 4004 ran at 108 kHz, which is roughly 1/10 of 1 MHz (Rosch, 66). The smallest feature on the chip measured 10 microns, and the chip contained 2,300 transistors. The next generation of Intel chips used an 8-bit data bus. The first member of this generation was released in 1972 and was called the 8008. This chip was the same as the 4004, but it had 4 more bits on each register. 
This chip had enough bits to code 256 symbols (2^8=256). This number is easily enough to encode our alphabet, numerals, punctuation marks, etc. The 8008 also ran a little faster than the 4004 with its speedy clock of 200 kHz. The 8008 contained 3,500 transistors and had line widths of 10 microns. Both chips had a MIPS of 0.06 (Rosch, 66). The next member of the Intel family was born in 1974 and was called the 8080. This chip was intended to handle byte-sized (8-bit) data. The 8080 contained 6,000 transistors and had 6 micron technology. This chip performed at 0.65 MIPS and had an internal clock speed of 2 MHz. This was one of the first chips to have the capabilities of running a small computer (Rosch, 66). In June of 1978 the 8086 family was released by Intel. These chips used 16-bit registers. The fastest chip in this series ran at 10 MHz and could execute 0.75 MIPS. This chip forced engineers of the time to begin developing fully 16-bit devices, which were more expensive than their 8-bit brethren. Because of this, the 8086 family was considered ahead of its time (Rosch, 67-68). A year later Intel introduced the 8088. This chip was a step backwards in chip evolution with its 8-bit data bus. The 8088 could process 0.64 MIPS with its 6,000 transistors. The 8088 used 6 micron technology. This chip is worth mentioning primarily because IBM chose to use it in its first personal computer. IBM was able to use the 8088 with existing 8-bit hardware, which was more cost effective. Later IBM began using the 8086 in its newer systems (Rosch, 68). In 1982 Intel released the 80286. The 286 family was available in clock speeds of 8, 10, and 12 MHz that could execute 1.2, 1.5, and 1.66 MIPS respectively. The 80286 contained 134,000 transistors with 1.5 micron technology. These chips all used a 16-bit data bus and were used by IBM in its AT models. This was also the first chip to support virtual memory, the use of disk space as RAM (Random Access Memory). 
To allow full downward compatibility the 286 was designed to have two operating modes. These modes are real and protected mode. Real mode mimics the operation of an 8086. Protected mode allows multiple applications to be run simultaneously and not interfere with each other (Rosch, 70-71). The next member of the Intel family was added in November 1985 and was the 80386. These chips are offered in speeds of 16, 20, 25, and 33 MHz and can process 5.5, 6.5, 8.5, and 11.4 MIPS respectively. The number of transistors in the 80386 is 275,000 with 1.5 micron technology. The 386 family doubled the register size to 32 bits. The 386 also has 16 bytes of prefetch cache, which the chip uses to store the next few instructions. The 386 comes in three models: the 386DX, the 386SX, and the 386SL. The 386DX was the original and most powerful. The 386SX is a more economical sibling to the DX. It is basically a scaled-down, less powerful DX. Also, the SX uses a 16-bit data bus. The SL also uses 16-bit buses, but it includes power-saving features targeted at notebook usage. The SL uses 1.0 micron technology and contains 855,000 transistors (Rosch, 72-78). The 80486 family was introduced in April 1989 and became a "better 386" (Rosch, 78). The 486 was originally released in a DX model with speeds of 25, 33, and 50 MHz that processed 20, 27, and 41 MIPS respectively. The DX also contains a math coprocessor or floating point unit that helps speed up math operations. The 486DX uses a 32-bit bus and contains 1,200,000 transistors. It uses 1.0 micron technology in the 25 and 33 MHz models, but the 50 MHz model uses 0.8. The next to be released was the 486SX. The SX was designed to cut cost by omitting the math coprocessor. As a result the SX will not perform as well as the DX in math-intensive operations. The SX contains 1,185,000 transistors and uses the same technology as the DX. 
The SX is available in 16, 20, 25, and 33 MHz models that process 13, 16.5, 20, and 27 MIPS respectively. To add the power of an FPU (Floating Point Unit) to the SX, Intel released the OverDrive upgrade processors in March 1992. The first, the 486DX2, incorporated clock-doubling technology. These chips operate at double the bus speed. These chips are available in 50 and 66 MHz models that can process 41 and 54 MIPS respectively. The 50 MHz model was designed to replace the 25 MHz 486SX and the 66 MHz model was for the 33 MHz 486SX. The OverDrive chips contain 1.2 million transistors. The next to be released was the SL model, which was, like the 386SL, targeted at laptop usage. The SL contains 1.4 million transistors and can process 15.4, 19, and 25 MIPS while running at 20, 25, and 33 MHz respectively. The 486DX4 was the next OverDrive chip to be released. It contains clock-tripling technology. The DX4 can turn 33 and 25 MHz 486s into a DX4-100 and a DX4-75 respectively. These chips can process 60 and 81 MIPS running at 75 and 100 MHz respectively. The DX4 uses 0.6 micron technology (Rosch, 84-85). The next addition to the Intel family was the Pentium. The Pentium was originally released in a 60 MHz model that operated at 5 volts. This chip contains 3,100,000 transistors and can process 100 MIPS. The next to be released was the 66 MHz model. It uses the same technology but is a 3.3 volt chip and can process 112 MIPS. Currently the Pentium is available in 66, 75, 90, 100, 120, 133, 150, and 166 MHz models. Beyond the 75, all Pentiums use 0.6 micron technology. A 180 MHz model is slated for future release. The Pentium family, like all of Intel's chips, uses CISC technology. Also, these chips use pipelining, superscalar architecture, and branch prediction logic. A Pentium OverDrive is also available for upgrading 486 systems to Pentium technology. The Pentium OverDrive is available in 63 and 83 MHz versions (Rosch, 85-87). 
After the Pentium, the only more advanced chip Intel has for personal use is the Pentium Pro. This chip has only been available for a short time and is targeted at workstation and server usage. It delivers its increased speed only when running Windows NT and native 32-bit software. When using 16-bit software, the less powerful Pentium will outperform its larger sibling. The Pentium Pro also contains 256K (256,000 bytes) of on-chip cache memory. The only certainty in the future of microprocessors is constant improvement. One prediction for the future is called Moore's Law. This prediction is named after Intel cofounder Gordon Moore, who presented it in 1965. The law states that transistor densities will double every two years. Line width is also continuing to shrink and is estimated to be at 0.2 microns by the turn of the century. When all is considered the future of computers is very exciting (Wyant and Hammerstrom, 184-185). Knorr, Eric. "From 586 to Pentium Pro: Choosing Your Dream PC." PC World, February 1996: 133-142. Rosch, Winn L. The Hardware Bible. Indianapolis: SAMS, 1994. Wyant, Gregg, and Tucker Hammerstrom. Intel: How Microprocessors Work. Emeryville: Ziff-Davis, 1994. f:\12000 essays\technology & computers (295)\Microsoft .TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ * Get More Information About Windows 95 * For more information about Microsoft Windows 95, take a look at Microsoft's WinNews file sections, which can be found on most major online services and networks. On the Internet use ftp or the World Wide Web (ftp://ftp.microsoft.com/PerOpSys/Win_News, http://www.microsoft.com). On The Microsoft Network, open Computers and Software\Software Companies\Microsoft\Windows 95\ WinNews. On CompuServe, type GO WINNEWS. On Prodigy JUMP WINNEWS. On America Online, use keyword WINNEWS. On GEnie, download files from the WinNews area under the Windows RTC. 
NEW SERVICE: To receive regular biweekly updates on the progress of Windows 95, subscribe to Microsoft's WinNews Electronic Newsletter. These updates are e-mailed directly to you, saving you the time and trouble of checking our WinNews servers for updates. To subscribe to the Electronic Newsletter, send Internet e-mail to enews@microsoft.nwnet.com with the words SUBSCRIBE WINNEWS as the only text in your message.

f:\12000 essays\technology & computers (295)\Microsoft Access An Overview.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Executive Summary

Microsoft Access 97 for the Windows 95 and Windows NT operating systems provides relational database power for your programs. Its visual design and event driven nature make Access a powerful tool that is easy to learn. Access is quick and easy to use, which makes it a popular tool with home users. Small business owners benefit greatly from Access because they can develop their own database applications and eliminate the cost of third party developers. Microsoft Access 97 makes it easy to turn data into answers and includes tools that help even first time users get up and running quickly. For example, the Database Wizard can automatically build custom databases in minutes, and the Table Analyzer Wizard quickly transforms linear lists or spreadsheets into powerful relational databases. Access 97 offers greatly enhanced 32-bit performance with smaller forms, more efficient compilation, and better data manipulation technology for quicker queries and responses. Other features further improve execution time and help you build fast business solutions. The Performance Analyzer Wizard automatically recommends the best way to speed up your database. Additionally, Visual Basic for Applications and OLE make it simple to build quick solutions and integrate them with other Microsoft Office programs.

Table of Contents

WHAT IS ACCESS? 
A BRIEF HISTORY OF ACCESS
ACCESS 1.X / ACCESS 2.0
ACCESS 95 / ACCESS 97
HARDWARE REQUIREMENTS
RAPID APPLICATION DEVELOPMENT
THE EVENT DRIVEN MODEL
VBA IN ACCESS 95 / 97
THE JET DATABASE ENGINE
WHERE IS ACCESS TYPICALLY USED?
ACCESS IN CORPORATE BUSINESSES
ACCESS IN SMALL BUSINESSES
ACCESS AT HOME
FEATURES OF ACCESS 97
INTEGRATING ACCESS WITH OTHER APPLICATIONS
MICROSOFT OFFICE
ACCESS AND VISUAL BASIC TOGETHER
CONCLUSION
GLOSSARY
ENDNOTES
BIBLIOGRAPHY

What is Access?

Microsoft Access for Windows is a relational database management system. Access uses the graphical abilities of Windows so that you can easily view and work with your data in a convenient manner. Access makes your data available to you quickly and easily, and presents it in an effective and readable way. Its ability to locate information using query by example eliminates keystrokes and consequently speeds up the development process. Access lets you examine your data in a variety of ways. Sometimes the information in a record is easier to understand if the record's fields are arranged on a form or a report in a visually pleasing way; sometimes you need to see the maximum number of data records possible on your screen. A Microsoft Access form is a special window that is used for data entry. You can use the visual capabilities of Windows to create a custom form using a combination of graphics and text. Forms can present data in a format that is easier to read and understand than a datasheet. The Form Wizard is a built-in tool that helps you create data entry forms by asking you to answer a number of predefined questions about how the data is to be displayed, then setting up the layout based on your responses. Overall, Access is a tool that allows users to create, edit, and maintain sophisticated databases, and users can accomplish this without programming skills. 
However, Access Basic provides programmers with additional abilities to automate and extend the functionality of their database programs.

A Brief History of Access

Access 1.x / Access 2.0

Access 1.x and 2.0 run under Windows 3.1. Access 1.0 debuted in 1993 and set the standard for databases in Windows. At this point, Access was a little too much for home use. The typical home PC user was not at the same level as Access with respect to application development and understanding of relational data modeling. Access had only a few wizards and thus a long learning curve. Additionally, few home PCs had adequate RAM and processor speed to accommodate Access. The arrival of Access 2.0 changed things quite a lot, offering more wizards and add-ins to supplement the package.

Access 95 / Access 97

Access 95 and 97 are 32-bit applications which run under Windows 95 or Windows NT. Access 95 is also frequently referred to as Access 7.0. Microsoft has improved these products over previous versions of Access so that they integrate better with other applications in the Microsoft Office suite. The 32-bit versions are also more heavily oriented towards the home user than previous versions. Access 95 introduced Visual Basic for Applications as a means of integration and compatibility. Users could use Access to define data structures and their relationships, then export the generated schema to VB. This gave Access 95 an edge over competing products such as Paradox and Delphi. Access 97 is a product with multiple personalities. On the surface, a quick tour of Access leads you to believe that it was created primarily for novice database programmers. 
Like a friendly personal assistant, Access helps to organize and store information by using features such as the following:
· A well organized Database window
· Wizards for constructing database objects
· A variety of built-in properties to define each object
· A simplified macro scripting language

Below the surface lies a completely different infrastructure. Access 97 has the following additional abilities:
· Automation allows Access to print reports from within Visual Basic or to edit Access 97 table data while inside an Excel worksheet.
· The Visual Basic for Applications programming language gives you the building blocks for creating robust applications in Access 97 and for automating complex business processes.
· The Access relational data model and Structured Query Language (SQL) foundation allow you to make uncomplicated representations of complex data.
· The Jet Database Engine exposes programmable Data Access Objects that provide your program code direct access to database data and structures.

Throughout the different versions of Access, its user friendly interface has not changed much. It still makes designing a database look relatively easy, but it has become more flexible and powerful.

Hardware Requirements

Access is a resource hungry application. However, hardware requirements for developers and end users are different. Be sure to note the actual as opposed to recommended requirements.

What Hardware Does Your System Require? 
According to Microsoft documentation, the official minimum requirements to run Microsoft Access 7.0 for Windows 95 are as follows:
· 386DX processor
· Windows 95 or Windows NT 3.51 or later
· 12 megabytes of RAM on a Windows 95 machine
· 16 megabytes of RAM on a Windows NT machine
· 14 to 42 megabytes of hard-disk space, depending on whether you perform a Compact, Typical, or Custom installation
· 3 1/2-inch high-density disk drive
· VGA or higher resolution (SVGA 256-color recommended)
· Pointing device

Recommended specifications for a development machine are much higher because you will probably run other applications along with Microsoft Access. In addition to Microsoft's requirements, these are the recommended specifications:
· A Pentium or Pentium Pro processor, 100 MHz or faster
· A fast ATA-2 or SCSI hard drive
· At least 20 megabytes of RAM for Windows 95, and 24 megabytes for Windows NT. Increase this amount if you like to run multiple applications simultaneously.
· A high-resolution monitor (larger is better) and SVGA graphics

The bottom line for hardware requirements is that the more you have, the better off you are. The increased speed and performance will make you much happier when you use Access or any other large, powerful program.

Rapid Application Development

Visual application design tools such as Access, Delphi, Visual Basic, and Oracle Forms allow the user to begin program development with the user interface. This approach is radically different from traditional program development, where the user interface is typically designed last. Access allows the programmer to draw the individual components of the program on the screen, then link code to each object on that form. The programmer creates the interface much as he would use a paint program: different objects and painting tools are selected from toolbars and applied to the form with a click of the mouse. This process is commonly referred to as Rapid Application Development, or RAD. 
RAD allows the programmer to develop applications quickly, with very little turnaround time and a minimal amount of coding. What RAD eliminates is duplication of effort: GUI design elements common to all Windows applications do not have to be recreated for each program.

The Event Driven Model

Event driven programming is a concept which goes hand in hand with RAD tools. Access and almost all competing RAD products fully support the event driven model. Traditional programs have a well defined flow of control; they execute sequentially from beginning to end. Event driven programs, on the other hand, do not have a logical beginning or ending point. The program does nothing until an event occurs, and then it responds according to the type of event. Some examples of events include an application being run, mouse clicks, mouse movements, and keystrokes. Unknown to the user, Windows traps events and notifies the application behind the scenes. Access traps these notifications, called messages, and allows the programmer to design his program around those events. For example, a double-click event on an OK button could initiate a database query or anything else the programmer desires.

VBA in Access 95 / 97

Visual Basic for Applications is the development language for Microsoft Access 95. It provides a consistent language for application development within the Microsoft Office suite. The core language, its constructs, and the environment are the same within Microsoft Access for Windows 95, Microsoft Visual Basic, Microsoft Excel, and Microsoft Project. The early versions of Access used a coding engine called Access Basic or EB (Embedded Basic). It had some similarities with the Basic dialects in its sibling products, such as Excel and Project. A major difference, however, is that Access Basic was written in assembly language while VBA was written entirely in C. 
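To make the event driven model concrete, here is a minimal sketch in Python (rather than Access Basic); the handler and event names are invented for illustration, not taken from Access itself. The program does nothing on its own: it simply waits until an event is dispatched, then runs whatever code the programmer attached to that event.

```python
# Minimal sketch of the event driven model. The event and handler
# names ("ok_button.double_click", run_query) are hypothetical.

class EventLoop:
    def __init__(self):
        self.handlers = {}  # maps an event name to its handler function

    def on(self, event, handler):
        """Design time: attach code to an object's event."""
        self.handlers[event] = handler

    def dispatch(self, event):
        """Run time: the system traps an event and notifies the program."""
        handler = self.handlers.get(event)
        if handler:
            handler()  # the program responds only now, when an event occurs

def run_query():
    print("running database query...")

loop = EventLoop()
loop.on("ok_button.double_click", run_query)  # wire code to the event
loop.dispatch("ok_button.double_click")       # a double-click arrives
```

An event with no handler attached is simply ignored, which mirrors the description above: the program idles until an event it cares about occurs.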
Microsoft was highly motivated to implement one common Basic engine for all of its development applications. The benefits of this standardization to the developer are:
· Reduced learning curve. Microsoft is distributing Basic more widely each year, adding it to everything from the entire Office suite to its Internet browsers and servers. As a solution developer, you can now learn one rendition of the Basic language and one development interface, then carry your skills and experience with Access VBA into your work with other VBA host products.
· Code portability. One of the current developer buzzwords is reusable objects, a term that describes self-contained servers (components that provide services to an application). In order for a code procedure to qualify as a reusable object, you must be able to carry code from one host application into another and use it unmodified. VBA provides this capability.
· Shared resources. By sharing a centralized coding and run-time environment, multiple tools and applications on your machine share the same dynamic link libraries and type libraries. The performance of your workstation improves when you have fewer resources loaded into memory, and this speeds up your development efforts. Disk space consumption, application deployment efforts, and version control issues are all favorably impacted when multiple applications on your machine share central services.

Simple Access applications can be written using macros. Although macros are great for quick prototyping and very basic application development, most serious Access development is done using the VBA language. 
Unlike macros, VBA provides the ability to:
· Work with complex logic structures
· Utilize constants and variables
· Take advantage of functions and actions not available in macros
· Loop through and perform actions on table rows
· Perform transaction processing
· Programmatically create and work with database objects
· Implement error handling
· Create libraries of user-defined functions
· Call Windows API functions
· Perform complex DDE and OLE automation commands

The Jet Database Engine

Microsoft Access 97 ships with the Microsoft Jet database engine. This is the same engine that ships with Visual Basic and with Microsoft Office. Microsoft Jet is a 32-bit, multithreaded database engine that is optimized for decision-support applications and is an excellent workgroup engine. Microsoft Jet has advanced capabilities that have typically been unavailable on desktop databases. These include:
· Access to different data sources. Microsoft Jet provides transparent access, via industry standard ODBC drivers, to over 170 different data formats. These formats include dBASE, Paradox, Oracle, Microsoft SQL Server, and IBM DB2. Developers can build applications in which users read and update data simultaneously in virtually any data format.
· Engine-level referential integrity and data validation. Microsoft Jet has built-in support for primary and foreign keys, database specific rules, and cascading updates and deletes. This means that a developer is freed from having to create rules using procedural code to implement data integrity. The engine itself consistently enforces these rules, so they are available to all application programs.
· Advanced workgroup security features. Microsoft Jet stores user and group accounts in a separate database, typically located on the network. Object permissions for database objects are stored in each database. 
By separating account information from permission information, Microsoft Jet makes it much easier for system administrators to manage one set of accounts for all databases on a network.
· Updateable dynasets. As opposed to many database engines that return query results in temporary views or snapshots, Microsoft Jet returns a dynaset that automatically propagates any changes users make back to the original tables. This means that the results of a query, even one based on multiple tables, can be treated as a table itself. Queries can even be based on other queries.

Where is Access Typically Used?

Access in Corporate Businesses

Many midsize and large companies rely heavily on Access, but none rely exclusively on it. Companies of any significant size usually have complex data needs, multiple database platforms, and dozens to thousands of application users. In such an environment, no single product is sufficient to satisfy all needs; Access becomes one piece of an often complex puzzle of application development tools. Virtually all technology companies with more than one hundred employees have some in-house development staff. These departments are usually called Information Systems (IS) or Information Technology (IT). Corporations with changing technology face the challenge of efficiently retraining their application development staff. Access wins big in such circumstances for two main reasons. First, Access has a reasonable learning and implementation cycle. It is neither the easiest nor the hardest development tool to learn. There are enough books, videos, courses, and conferences built around Access that companies can shop competitively and select the best staff retraining option they can find. There are also thousands of consultants and contractors who can help the IT staff make the transition to Access without wandering in the dark. Secondly, Access is flexible. 
Access fits well into corporate development models because it can be extended in the following ways:
· Access coexists with other applications. Companies using Excel or Word find Access easy to add to existing desktops. Users are comfortable with the Office style user interface, appreciate the built-in data links between each of the products, and enjoy features like drag-and-drop. The IT staff can use Automation to add extra capabilities to the exchange of information between these products.
· Access connects to existing data. Using ODBC technology and ISAM drivers, Access can import or link to text file data, spreadsheet data, Xbase data, Paradox data, Web pages, and SQL based data residing on platforms ranging from PC servers to mainframes. Companies can continue to use data stored in non-Access formats and easily convert such data to native Access data when required.
· Access uses Basic. Many IT programmers have been writing in some dialect of Basic for years and find the transition to programming in Access only slightly challenging. Also, where Visual Basic is already part of an IT department's tool set, Access fits in well due to its many similarities to and compatibility with VB.

Access in Small Businesses

Access is best suited to small businesses. Microsoft had this market in mind when they started creating wizards in the Office product line. Because this market consists of people short on both time and money, they will not use Access if it cannot solve their problems quickly and cost effectively. Many small business owners and managers use Access themselves as a productivity and decision support tool. Small businesses frequently have only a few computer literate employees on staff, so the ability of Access to manage a few dozen simultaneous users is quite adequate. 
Business owners on a tight budget find that they can learn enough about Access to produce a simple but effective custom application with a few weeks of training and a few more weeks of development time. Of course, a very small business may not even need Access for the application development power it provides. Even without an application and its forms, you can be productive with Access by entering data into table datasheets, running summary queries, exporting data to Excel for analysis, and printing reports.

Access at Home

Four years ago, if you thought using Access 1.x at home was like using a sledgehammer to swat a fly, you were correct. At that time, home PC users lacked sophistication and most could not grasp the relational data model. Access had only a few wizards and a long learning curve, and few home PCs had the 16 megabytes of memory and 486 or Pentium processors that Access demands. The current home marketplace is quite different. The explosion of multimedia PCs has given many home PC users more than enough power to run Access. Microsoft Office Professional, of which Access is a part, is convenient for home users who want to use the same software at home that they have already learned to use at work. If you use or intend to use Access at home, you most likely fit into one of two categories:
· you are a business user bringing Access work home, or
· you are a home user who knows Access through your job and wants your home machine to resemble your work machine.

It is natural to reason that if Access can manage your business data, it can certainly handle your personal data as well. When you create a new database in Access 97, you can select a template for the Database Wizard to meet your specific purpose. Some of these database templates, such as Book Collection, Donations, and Household Inventory, are quite obviously designed for home PC users.

Features of Access 97

· Database Wizard. 
This can help you create a database to manage home data using a standard template. The resulting application can then be modified.
· Table Wizard. This steps you through the process of creating commonly used tables and relationships.
· Form Wizard. This tool saves time by removing most of the tedious form layout work.
· Assistant. The Assistant character answers simple help requests and is designed to help new users feel less intimidated by the product.
· Import Wizards. Many home users keep their records in products that produce spreadsheet or text format files. The Import Wizards help you load such data into Access.
· Easy queries. Access 97 has a powerful SQL based query engine, but provides home users with layers of usability features (query wizards, sortable datasheets, query filters, and the like) on top of that engine. This enables users to easily ask everyday personal questions like "What is the oldest bottle of wine in my collection?"
· Macros. Home users often prefer to use macro scripts rather than program in Basic.
· Add-ins. As more copies of Access enter the home market, third parties will produce additional tools and wizards appropriate for home users.
· Export to Word. Historically, home PC users have spent more time in their word processors than in their database software. Access makes copying and merging data to Word easy.
· Publish to the Web. If you maintain a home page on the Internet, you can use the new Internet data publishing features to help translate data into HTML.
· Multi-user. Access makes data available to workgroups of multiple users by providing built-in record locking. This is available in forms and table datasheets without any programming.
· Visual Basic for Applications. Access is highly programmable because its VBA language provides the ability to write custom procedures and because it provides event notifications that can be detected from code. 
Also, existing code from Visual Basic or Excel VBA libraries can be easily ported to Access VBA code libraries.
· Forms. IT groups can create complex entry/edit forms which provide selective access to records, validation of data, query-by-form capabilities, and spell checking.
· Reports. Corporate managers make many of their daily decisions from reports. Access lets them use graphical reports and can filter the reports using queries and parameters. Reports can also be connected to linked external data.
· SQL. Most IT programmers have been exposed to SQL while working on minicomputer or mainframe databases. They can quickly grasp the query capabilities of Access.
· Intranet capabilities. Access applications can provide users with links to Web pages on a corporate intranet through hyperlinks on form controls and in table fields.
· Interoperability. Features like Automation from Access to Excel and Word or the new Publish to the Web Wizard give users flexibility when they publish and report company data. Interoperability is covered in more depth in the following section.

Integrating Access with Other Applications

Microsoft Office

Access is an excellent tool for multifaceted solutions that involve integration with other Microsoft applications. Access 97 communicates better than ever with its siblings in Microsoft Office because of the following features:
· Drag-and-drop. You can drag-and-drop form data, cells from a table datasheet, and entire table and query objects into Excel worksheets and Word documents. Conversely, you can drag-and-drop Excel cells into Access to create a new table. You can also drop Access objects onto the Windows desktop to create shortcuts to databases.
· Save as Rich Text Format. You can save the output of a table datasheet, a form, or a report as a Rich Text Format (RTF) file that can be loaded into Word with the formatting preserved.
· Mail Merge Wizard. 
Using this wizard you can link a Word mail merge document to data in Access and retrieve the latest data from Access whenever you print your Word merge document.
· Save as an Excel worksheet. You can save the output of a table datasheet, a form, or a report as an Excel file with the formatting preserved.
· Excel AccessLinks. The AccessLinks add-in program in Excel lets you create Access forms and reports using data in Excel and export data from Excel into Access tables.
· E-mail attachments. Using the SendObject macro action or File Send... menu selection, you can attach an Access datasheet, form, report, or module to an e-mail message as a Rich Text Format file, an Excel worksheet, or a text file.
· Common interface elements. The new Office 97 Assistant and Command Bar features provide a common set of user interface construction tools.

Access and Visual Basic Together

With Access 2.0, a significant two-way migration of developers occurred. Many Access developers realized that the investment they had made in learning Access Basic enabled them to learn Visual Basic more easily and added another powerful product to their skill set. From the other direction, most Visual Basic programmers adopted the Jet Database Engine as their preferred file-server database technology and adopted Access to create their database structures, queries, and reports. Thus, many Access developers became Visual Basic developers, and vice versa. This trend will only accelerate with the 97 versions of these products. The following three key areas help illustrate this point:
· Visual Basic for Applications. Both Visual Basic 5 and Access 97 utilize the same programming language engine. Program code developed in either environment can be easily ported to the other. The benefits include the following:
· You can create one common code library with procedures that work in both environments. 
· Developers can be trained in one language and use it in multiple products, including Access 97, Excel 97, Project 97, PowerPoint 97, Visual Basic 5, and Word 97.
· You can quickly prototype applications destined for Visual Basic 5 in Access 97 using the Table and Form Wizards and some simple navigation code, then preserve any VBA code when moving it over to VB 5.
· Automation. The OLE communication wire between Access 97 and Visual Basic 5 runs in both directions:
· You can use Visual Basic 5 to drive Access 97 as an Automation server for editing table data or printing database reports from within a VB 5 application.
· You can create applications in VB 5 that are specifically designed to be OLE servers to Access 97, enhancing the capabilities of Access 97 while providing the faster performance of a compiled application.
· You can build ActiveX controls in Visual Basic or Visual C++, or buy them, and use the same control and code to extend both Access 97 and VB 5. Both products are host containers for ActiveX controls (OCX files).
· Jet Database Engine. Visual Basic 5 makes even broader use of Jet through the same Data Access Objects coding language as Access 97 uses. More and more developers will create multifaceted solutions that use both Access 97 and VB 5 with the same back-end database in Jet.

Conclusion

Microsoft uses continuous user-driven research programs to gain insight into how customers use Microsoft Access and how it could be improved. Based on extensive research, Microsoft has designed Access for Windows 95 around the following design goals:
· Make it easier for people to get their work done using a database
· Strengthen integration with Microsoft Office applications
· Provide greater flexibility to a broad range of computer users
· Make it easier for developers to create custom database solutions

The result is that Microsoft Access for Windows 95 is the easiest to use and most integrated desktop database available. 
It includes innovative technologies that provide all types of users with compelling reasons to make Microsoft Access a standard part of their business computing desktops.

Glossary

ActiveX: Microsoft's answer to Java. ActiveX is a stripped down implementation of OLE designed to run over slow Internet links.
API: Application Program Interface. The interface (calling conventions) by which an application program accesses operating system and other services. An API is defined at source code level and provides a level of abstraction between the application and the kernel (or other privileged utilities) to ensure the portability of the code.
DDE: Dynamic Data Exchange. A Microsoft Windows 3 hotlink protocol that allows application programs to communicate using a client-server model. Whenever the server (or "publisher") modifies part of a document which is being shared via DDE, one or more clients ("subscribers") are informed and include the modification in the copy of the data on which they are working.
DLL: Dynamically Linked Library. A library which is linked to application programs when they are loaded or run rather than as the final phase of compilation. This means that the same block of library code can be shared between several tasks rather than each task containing copies of the routines it uses.
GUI: Graphical User Interface.
HTML: Hyper Text Markup Language.
ISAM: Indexed Sequential Access Method. A file access method supporting both sequential and indexed access.
IT: Information Technology.
OCX: OLE custom controls. An Object Linking and Embedding (OLE) custom control allowing infinite extension of the Microsoft Access control set.
ODBC: Open DataBase Connectivity. A standard for accessing different database systems. There are interfaces for Visual Basic, Visual C++, and SQL, and the ODBC driver pack contains drivers for the Access, Paradox, dBase, Text, Excel, and Btrieve databases.
OLE: Object Linking and Embedding. A distributed object system and protocol from Microsoft. 
RAD: Rapid Application Development.
RTF: Rich Text Format. An interchange format from Microsoft for exchange of documents between Word and ot

f:\12000 essays\technology & computers (295)\Microsoft.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF CONTENTS

MICROSOFT HISTORY
EARLY INFLUENCES
FIRST BUSINESS VENTURE
EDUCATION ATTEMPT
THE MOTIVATIONAL SIDE OF FEAR
A JAPANESE CONNECTION
IBM INFLUENCE
SURVIVAL OF THE FITTEST
A CRUCIAL DEAL
COMPETITION ERRORS
BIRTH OF WINDOWS
MISSION STATEMENT AND ANALYSIS
INDUSTRY AND COMPETITIVE ANALYSIS
DOMINANT ECONOMIC CHARACTERISTICS
Market Differentiation
Pace of technological change
Advances to the Printed Word
DRIVING FORCES
The Internet
The Information Highway
KEY SUCCESS FACTORS
PORTER'S 5 FORCES MODEL
INDUSTRY STRATEGIES
COMPANY SITUATION ANALYSIS
SWOT ANALYSIS
Strengths
Weaknesses
Opportunities
Threats
STRATEGIC ISSUES
STRATEGY AND SITUATION
STRATEGIC FIT
FINANCIAL ANALYSIS
OEM REVENUES
WINDOWS 95 RETAIL UPGRADE
DESKTOP APPLICATIONS AND OTHER PRODUCTS
COST OF REVENUES
OPERATING EXPENSES
BALANCE SHEET
PLATFORMS PRODUCT GROUP
APPLICATIONS AND CONTENT PRODUCT GROUP
SALES AND SUPPORT GROUP
EARNINGS AND FINANCES
FINANCIAL RATIOS
ANNUAL RATIOS
COMPARISON TO INDUSTRY
STOCK PRICE COMPARISONS
DIVERSIFICATION
ANALYSIS

Microsoft Corporation

Microsoft History

Historians categorize blocks of time by the discovery of certain raw materials that humans utilized. The Bronze Age and the Iron Age were two periods in human history that proved, through the discovery of artifacts, that humans learned to harness these raw materials ingeniously. The Industrial Revolution of the late nineteenth century brought the discoveries of the Bronze and Iron Ages to new heights, and the advent of the locomotive, automobiles, cargo ships, and airplanes were the most evident by-products of such raw materials. 
Use of these by-products from the earth's raw materials dramatically changed the world of business and trade. With the subsequent invention of wire communications (i.e., tapping out Morse code and speaking over telephone lines), business and trade grew exponentially. Wireless communications via the inventions of radio, television, and motion pictures contributed greatly to the advances of the Industrial Revolution. The need to find better ways of doing business to keep the marketplace fresh and innovative has driven the human race toward the brink of a new era: the Information Age. Unlike the more tangible qualities of prior ages, the Information Age offers less defined qualities. At the heart of this new age is the advent of the personal home computer. Pumping life into this otherwise inert home appliance is software that incorporates the necessary commands to access information stored within the computer's memory. The company that offered the world its first software manufacturing firm was Microsoft Corporation (MSFT on the NASDAQ exchange). At the helm of this young, innovative company are William Gates and Paul Allen, a pair of former high school chums who envisioned a world of home computer technology years before such a dream became even remotely possible.

Early Influences

Their story begins at Lakeside High, a private high school in Seattle, Washington. The Mothers' Club at Lakeside decided to purchase a computer terminal for the kids with proceeds from bake sales and rummage sales. Students at Lakeside became enthralled with this new toy. True to their innate curiosity, Gates and Allen began to dabble further into the workings of the computer; Gates, for example, wrote his first computer program at the age of thirteen: a version of Tic-Tac-Toe. Because the computer terminal was so slow, one game of Tic-Tac-Toe took up most of a lunch break; if played on paper, a full 30 seconds might have been required. 
Despite the simplicity of the program, it spawned the creative genius in both young men to tackle more challenging programs in the years ahead. Because the Mothers' Club was unable to afford continued computer time at $40 per hour, it decided to make it the students' responsibility to purchase their own computer time. Most students complied by getting jobs outside school. Gates and Allen became programmers in the summers for compensation of computer time and $5,000 in cash. In his 1995 book The Road Ahead, Gates describes the mainframe computers of the early '70s as ". . . temperamental monsters that resided in climate-controlled cocoons . . . connected by phone lines to clackety teletype terminals. . . ." (11) He went on to explain that a personal computer called the PDP-8 was actually available from Digital Equipment Corporation. According to Gates it was ". . . an $18,000 personal computer which occupied a rack two feet square and six feet high and had about as much computing capacity as a wristwatch does today . . . Despite its limitations, it inspired us to indulge in the dream that one day millions of individuals could possess their own computers." (11-12)

In the summer of 1973, Paul Allen, who knew more about computer hardware than Bill Gates, shared with Gates an article buried on page 143 of Electronics Magazine. The article described the invention of the 8008 microprocessor chip by a young company called Intel. Paul was surprised to receive the technical manual for the chip in the mail simply upon request. Immediately, he went to work analyzing its capabilities. Because it contained relatively few transistors, the 8008 chip was very limited in its use, but Allen discovered that despite the limitations, the chip was good for repetitive tasks and mathematical data.

First Business Venture

When Paul Allen entered college at Pullman, Washington, a town on the east side of the state, sixteen-year-old Bill Gates traveled frequently by bus to visit him.
On these long trips across the state, Gates wrote a program that facilitated the reading of traffic information gathered by municipalities through devices set up at certain intersections. A long rubber tube stretched across the road from one of these devices, and each time a vehicle ran over the tube, a punch was made in the roll of paper within the device. People deciphered this crude data by visually inspecting the punch holes and annotating the results. Gates's program relieved humans of this tedious task, using the technology of the 8008 chip instead. With this program Gates and Allen launched their first company, Traf-O-Data. The two programmers were full of enthusiasm for the success of their new company; most communities, however, were reluctant to purchase from two kids, so their fledgling company enjoyed only marginal sales.

Education Attempt

Gates entered Harvard College in 1973, while Allen secured a job in Boston, Massachusetts as a programmer for Honeywell. In 1974 Intel announced the advent of the 8080 chip, which boasted 2,700 more transistors than its predecessor. Because of the disappointment they had experienced on the hardware side of computing through Traf-O-Data's dismal sales, Gates and Allen focused on new opportunities on the software side of computers. With a vision of millions of computers owned by individuals, the pair banked on competition between Japanese and American companies for control of the computer hardware market. With this in mind, and with the introduction of the 8080 microprocessor chip (and inevitable successors to it), Gates and Allen determined that their future lay in developing software for these computers.

The Motivational Side of Fear

On a cold New England morning, outside a newsstand in Harvard Square during one of his frequent visits to Bill Gates, Paul Allen picked up a copy of the January issue of Popular Electronics magazine.
The cover photo pictured a small computer kit called the Altair 8800. It sold for a mere $397 and had 4,000 characters of memory. Panic struck Gates: "'Oh no! It's happening without us! People are going to go write real software for this chip.' I was sure it would happen sooner than later, and I wanted to be involved from the beginning. The chance to get in on the first stages of the PC revolution seemed the opportunity of a lifetime, and I seized it." (Gates, 16)

Driven by fear of someone writing software for the Altair 8800 personal computer before his own software was complete, Gates scrambled feverishly in his Harvard College dormitory, forgoing a decent night's rest. Five weeks later, a version of BASIC became the impetus for "the world's first microcomputer software company . . . In time we named it 'Microsoft.'" (Gates, 17) In the spring of 1975, Allen quit his job with Honeywell; Gates decided to take an indefinite leave of absence from college (never intending to forgo a degree). Both young men planned to dive into the world of the computer software business at its very beginning stages. Allen was twenty-two years young and Gates was only nineteen. They set up operations in Albuquerque, New Mexico because the city was home to MITS, creator of the first inexpensive personal computer offered to the general public: the Altair 8800. Microsoft provided the BASIC language because it gave computer users a way to write their own programs instead of having to rely on scarce, packaged software. Immediately, the MITS Altair 8800 faced strong competition from computer makers such as Apple, Commodore, and Radio Shack, who entered the personal computer market in 1977. The strategy at Microsoft was to convince computer manufacturers to buy licenses to "bundle" Microsoft software with their computers. Royalties would then be paid to Microsoft on each computer sale.
Aside from the antics of early software pirates and the lack of government laws preventing such activities, this strategy of selling licenses for the use of their software worked well for Microsoft.

A Japanese Connection

By 1979 half of Microsoft's business came from Japan. This was due in large part to the "sweat equity" of one man in particular: Kazuhiko (Kay) Nishi. Kay telephoned Gates in 1978 after discovering Microsoft in a newspaper article. Both Gates and Nishi were only twenty-two at the time and shared many similarities despite cultural and language differences. They met shortly after the phone call at an electronics convention in southern California. Without attorneys, they signed a 12-page contract which gave Nishi exclusive distribution rights to Microsoft's BASIC language in East Asia. Eventually, their original expectation of $15 million was realized ten-fold through sales as a result of that contract. Microsoft moved from Albuquerque, New Mexico to the Seattle, Washington area in 1979 with most of its twelve employees, later settling in its present home of Redmond. According to Gates, the mission of Microsoft was "to write and supply software for most personal computers without getting directly involved in making or selling computer hardware." (44) The programming team adapted programs to each machine and was "very responsive to all the hardware manufacturers . . . we wanted choosing Microsoft software to be a no brainer . . . along the way, Microsoft BASIC became an industry standard," Gates was quoted. (44)

IBM Influence

By 1980, International Business Machines (IBM) enjoyed an 80% market share of large computer hardware, but only marginal success in the smaller personal computer (PC) market. The Apple II computer appeared poised to tackle the business market, thanks in part to a popular spreadsheet program called VisiCalc. Based on Apple's success, IBM decided to enter the PC market.
In the summer of 1980, two emissaries from IBM met with Gates to discuss IBM's plans for a full-market assault using components already available off the shelf. IBM's plan was to utilize Intel's microprocessor chip and to use Microsoft's programming expertise rather than create its own software. As a result of this meeting, Microsoft hired Tim Paterson, from a Seattle, Washington firm, who became responsible for creating the Disk Operating System (DOS) for IBM-compatible computers.

Survival of the Fittest

The first IBM PCs hit the market in August of 1981 with a choice of three operating systems: Microsoft's DOS, UCSD Pascal, and CP/M-86. Gates realized that only one operating system could survive, just as only one videocassette format had survived its market battle previously (VHS beat out Betamax). Gates developed a three-part plan to come out on top of the competition:

- make Microsoft DOS the best product of the three
- help other software companies write MS-DOS based software
- ensure MS-DOS stayed inexpensive

A Crucial Deal

With these objectives in mind, Gates offered IBM an attractive deal. Microsoft would allow IBM to use DOS (called IBM-DOS or PC-DOS to distinguish it from the nearly identical MS-DOS) for a low one-time fee, for as many PCs as IBM could sell. This deal gave IBM the incentive to push DOS rather than the other two operating systems, whose manufacturers received royalties for each PC sold with their respective operating systems installed. Hence, IBM sold the UCSD Pascal P-System for $450 and CP/M-86 for $175, while DOS was offered at only $60. Gates's strategy worked, as he stated: "Our goal was not to make money directly from IBM, but to profit from licensing MS-DOS to computer companies that wanted to offer machines more or less compatible with the IBM PC. IBM could use our software for free, but it did not have an exclusive license or control of future enhancements.
This put Microsoft in the business of licensing a software platform to the PC industry. "Consumers bought the IBM PC with confidence . . . each new customer . . . added to the IBM PC's strength as a potential de facto standard for the industry. . . . the availability of software and hardware add-ons sold PCs at a far greater rate than IBM had anticipated, by a factor of millions," which meant "billions of dollars for IBM." (Gates, 49-50)

Competition Errors

After three years of competitive blitzing, all competing standards for personal computers had disappeared, with the exception of Apple's Apple II and Macintosh. "Hewlett-Packard, DEC, Texas Instruments, and Xerox, despite their technologies, reputations, and customer bases, failed in the PC market in the early 1980s because their machines weren't compatible and didn't offer significant enough improvements over the IBM architecture." (Gates, 50) Only Commodore Corporation fared well through the eighties in the PC market, due substantially to the lower cost of its models 64 and 128 and the superb graphics of the Commodore Amiga, still used today by some commercial movie studios. Gates defends IBM against certain revisionist historians who conclude ". . . IBM made a mistake working with Intel and Microsoft to create its PC. They argue that IBM should have kept the PC architecture proprietary, and that Intel and Microsoft somehow got the better of IBM. But the revisionists are missing the point. IBM became the central force in the PC industry precisely because it was able to harness an incredible amount of innovative talent and entrepreneurial energy and use it to promote its open architecture. IBM set the standards." (Gates, 50)

Birth of Windows

Because DOS required its users to type character-based commands on a keyboard, Gates saw the potential for Microsoft to lose its leading software position if it stayed with the MS-DOS format.
Researchers at Xerox's Palo Alto Research Center in California studied human-computer interaction and found that computer users could more easily instruct the computer if they were allowed to point at commands via a device called a "mouse," as opposed to typing commands on a QWERTY keyboard. According to Gates, "Xerox did a poor job of taking commercial advantage of this groundbreaking idea, because its machines were expensive and didn't use standard microprocessors. Getting great research to translate into products that sell is still a big problem for many companies." (53) The process of using pictures (icons) to command a computer, rather than typed characters, is called graphical technology. The visual layer that presents the character-based operating system graphically is called a Graphical User Interface (GUI). In 1983, Microsoft announced its version of a GUI, called Windows. The Apple Lisa and Xerox Star were GUIs already available to consumers, but both, in Gates's view, ". . . were expensive, limited in capability, and built on proprietary hardware architectures." (53) This meant that other hardware companies could not license the operating systems to build compatible systems. The same was true for software companies, and this hindered the creation of new applications for the Star and Lisa GUIs by outside companies.

MISSION STATEMENT AND ANALYSIS

At Microsoft, our long held vision of a computer on every desk and in every home continues to be the core of everything we do. We are committed to the belief that software is the tool that empowers people both at work and at home. Since our company was founded in 1975, our charter has been to deliver on this vision of the power of personal computing. As the world's leading software provider, we strive to continually produce innovative products that meet the evolving needs of our customers. Our extensive commitment to research and development is coupled with dedicated responsiveness to customer feedback.
This allows us to explore future technological advancements, while assuring that our customers today receive the highest quality software products.

A good mission statement attempts to answer some key questions about the company and the industry: Who are we? What business are we in? Where are we headed? In its mission statement Microsoft tells who it is, as well as what its business is. It stresses its goals and where it is headed very well. My biggest problem with this mission statement is that Microsoft is too worried about being on top and will do whatever is necessary.

INDUSTRY AND COMPETITIVE ANALYSIS

Dominant Economic Characteristics

Market Differentiation

The first popular graphical platform came to market in 1984 with Apple's Macintosh. It was an instant success, as the GUI platform of the Macintosh eliminated the need for obscure character commands. Gates worked closely with Steve Jobs, leader of the Macintosh team, in order to create Microsoft's competing GUI, Windows. The major difference between Microsoft and Apple was Microsoft's willingness to allow other software developers open access to the Windows format; Apple restricted its GUI to Macintosh computers only. That difference eventually helped elevate Microsoft to software industry leader, bar none. Gates devotes pages of explanation to why such a "great company" as IBM failed in its attempts to finally create its own software operating system. He apologetically cites the specific decisions that IBM made in the development of its OS/2 operating system. In his view, IBM's attempts produced disappointing results chiefly because graphical computing could have found mainstream success if IBM had been more cooperative with Microsoft in developing a general application of GUI software for existing hardware, rather than insisting on developing a whole new application.
When Microsoft went public in 1986, Gates offered IBM 30% of MSFT stock so that IBM could share in the fortune, be it good or bad, of Microsoft. IBM declined. This was Microsoft's attempt at keeping IBM close, as IBM had been instrumental in Microsoft's success. Despite not seeing eye to eye with IBM on the development of Windows, Gates saw the GUI application as the progressive alternative to DOS and continued to improve the existing applications. In the weeks prior to the release of Windows 3.0 in May 1990, Gates ". . . tried to reach an agreement with IBM for it to license Windows to use on its personal computers. We told IBM we thought that although OS/2 would work out over time, for the moment Windows was going to be a success and OS/2 would find its niche slowly." (62) IBM again refused to cooperate with Microsoft, insisting on total dedication to the development of OS/2, which was eventually doomed to an ignominious future. [IBM has proven conclusively through the years that it has no idea of how to create or market software. Examples are DisplayWrite word processing; the PCjr, IBM Personal Typing System, and the PS/1, all with proprietary software; OS/2, as mentioned above; and feeble attempts at networking. Now, with the purchase of Lotus, the software giant should request last rites.] According to Gates, "If IBM and Microsoft had found a way to work together, thousands of people-years (the best years of some of the best employees at both companies) would not have been wasted. If OS/2 and Windows had been compatible, graphical computing would have become mainstream years sooner." (62)

Pace of Technological Change

In its twentieth fiscal year (July 1 to June 30) since incorporation, Microsoft leads the software industry with revenues of $5,937,000,000 as of June 30, 1995.
It is the unequaled standard bearer for software manufacturers, and with its release of Windows 95, a fully graphical operating system, it should remain at the top for years to come. Despite its current position, Microsoft still faces new challenges, as with the progression of any high-tech industry. The most recent challenges facing Microsoft are its applications for the Internet and its commitment to the development of the information superhighway. In 1989 the U.S. Government decided to cease funding its 1960s project ARPANET and allow the project to be succeeded by its commercial equivalent, the "Internet." In its beginning stages, the Internet picked up where ARPANET left off. Its primary function was to provide electronic communications, or e-mail, solely between computer science projects and engineering projects. Its popularity increased as it became commercially available to PC users. To fully appreciate the significance of e-mail and the transmission of electronic data, consider the evolution of the printed language.

Advances to the Printed Word

When Johann Gutenberg introduced the printing press to Europe in 1450, the method of copying the printed word was revolutionized. Before the advent of the printing press there were an estimated 30,000 books on earth, most of them hand-written by monks. Although it took two years to complete the movable type for Gutenberg's Bible, once completed, multiple copies could be made rather quickly. Almost 500 years later, Chester Carlson, frustrated by the length of time involved in preparing patent applications, set out to invent an easier way to duplicate information in small quantities. What resulted was a process he called "xerography" when he patented it in 1940. In 1959, Carlson aligned with Xerox Corporation as a means of manufacturing and distributing "Xerox" copying machines. Xerox projected sales of perhaps 3,000 units.
Much to Xerox's surprise, customers placed orders for 200,000 units, and one year later nearly 50 million copies a month were being processed. By 1986, that figure had increased to 200 billion copies per month, and it has steadily increased ever since. The advent of xerography allowed small groups to enjoy the capabilities of a printing press for a fraction of the cost, and in a fraction of the time, a conventional printer would take.

The market size of the computer industry is very large; this past year it totaled $238.7 billion. It is expected to rise considerably in the next few years. The competitive scope of the computer industry is global and very strong; Microsoft is worldwide. The Japanese are very big competitors, but Microsoft is too powerful to compete with. Entry is very hard: the computer industry is a costly industry to enter, and to compete with large companies you would need millions of dollars to even consider getting started. One could start a small computer business focusing on one area without the cost being overly expensive; for example, if you wanted to focus on the accounting industry, you need not worry about anything else. The life of a product depends entirely on your needs, as well as on increases in technology. Microsoft comes out with new products all the time, but you don't necessarily need to buy them; sometimes a computer program can last companies for years. It is very difficult to enter the computer industry due to the large capital requirements and the rapid technological changes, so either backward or forward integration would be very difficult.

Driving Forces

There are several driving forces in the computer industry.
1) Increased efficiency due to economies of scale
2) Change in the industry growth rate
3) Product innovation due to rapid technological advancements
4) The need to be the first to develop a new program

The newest driving force in the computer industry is the Internet, or superhighway. The following describes both, along with the advantages they bring.

The Internet

The Internet offers even more advantages than xerographic copying: information can be accessed and/or distributed to all interested parties (with a PC) via the electronic transmission of data. As defined by Gates, the Internet is "a group of computers connected together, using standard 'protocols' (descriptions of technologies) to exchange information." (94) Electronic messages are sent via phone lines from one computer to another and stored in the electronic "mailbox" of another computer until the message is "downloaded" by the user. Another advantage of the Internet is "Web browsing" on the World Wide Web (WWW), or simply the "Web." Server companies offer graphical pages of information to be accessed by subscribers of their service. From the "home" page of a topic, one can activate hyperlinks for further information on given topics by clicking the mouse device of most PCs. Although Gates admits that Microsoft was surprised by the commercial success of the Internet, he has begun work on software applications to make the Internet easier to access for PC owners with limited computer knowledge. Some people may confuse subscriptions to companies on the Internet, such as CompuServe, Prodigy, and America On-line, with the creation of the information superhighway, but according to Gates, the Internet is simply a "precursor to the information highway." (90) Comparing the information highway with the Internet is like comparing a country lane with the Eisenhower Highway System.
Even that analogy would not do justice to the information highway as it will look in twenty or more years. The limitations of the Internet must first be overcome before anything resembling the actual information highway exists. One challenge that Microsoft and other companies face is to convince the phone companies and cable companies to replace the coaxial lines that serve homes and businesses with fiber-optic cables. Fiber optics will provide the bandwidth necessary for the immense amount of information sent on the highway. Two technologies currently in the works toward this transformation of trunk lines are DSVD and ISDN. Digital simultaneous voice data (DSVD) can be used with existing phone lines, but does not provide sufficient bandwidth to handle video transmissions; hence, new lines must be laid for this application to reach full capacity. Even with current integrated services digital network (ISDN) technology, which incorporates a wider bandwidth but requires the laying of new lines, the clarity of full-motion picture images still leaves much to be desired. Add-in cards which upgrade the PC "to support ISDN cost $500 in 1995, but the price should drop to less than $200 over the next few years. The line costs vary by location but are generally about $50 per month in the United States. I expect this will drop to less than $20, not much more than a regular phone connection." (Gates, 101)

The Information Highway

Once more and more PC owners hook up to the Internet with ISDN lines, the groundwork for further progress toward the information highway will be laid. The term "information highway" was coined by then-Senator Al Gore, "whose father sponsored the 1956 Federal Aid Highway Act" (Gates, 5) during the Eisenhower Administration. According to Gates, this terminology is flawed: it connotes the following of routes, with distance between two points. It implies traveling from one place to another, when the actual information highway will be free of such limitations.
Some people also confuse the information highway with a massive government project, which Gates feels ". . . would be a massive mistake for most countries . . . ." (6) Just as Microsoft's mission in 1975 was "a computer on every desk and in every home," (Gates, 14) so Microsoft now progresses toward ". . . 'information at your fingertips,' which extols a benefit rather than the network itself." (Gates, 6)

Key Success Factors

1) A high degree of expertise and product innovation
2) Being able to stay on the cutting edge of technology
3) Companies need to have a low degree of glitches in their programs
4) A very strong customer support system (user friendly)
5) Must be able to meet customer needs

The computer industry is a strong leader in technology. To compete you must stay one step ahead of the rest. Microsoft has proven how devoted it is to computer program development by always being one step ahead of the rest. When one is dealing with the computer industry it is very important to have knowledgeable employees working for you. The high degr

f:\12000 essays\technology & computers (295)\Misc Computer Essay.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Nothing attracts a crowd like a crowd. Today, with home computers and modems becoming faster and cheaper, the home front is on the brink of a new frontier of on-line information and data processing. The Internet, the ARPANET (Advanced Research Projects Agency Network) spinoff, is a channel of uninterrupted information interchange. It allows people to connect to large computer databases that store valuable information on goods and services. The Internet is quickly becoming a tool of vast data interchange for more than twenty million Americans. New tools are making an Internet presence an easier task.
Just as gold miners once set out for California in wagons to stake their claims in the gold rush, businesses and entrepreneurs are rushing to stake their claims on the information superhighway through Gopher sites, World-Wide Web sites, and electronic mailing lists. This article explains how businesses and entrepreneurs are setting up information services on the Internet that allow users to browse through picture catalogues, specification lists, and up-to-the-minute reports. Ever since Sears Roebuck created the first pictorial catalogue, the idea that merchandise could be selected and ordered in our leisure time has fascinated us. As in any cataloguing system, references make it easy to find what the user seeks. Since its inception, the Internet has been refining its search tools. Being able to find products through many catalogues is what makes the Internet shine in information retrieval. This helps consumers find merchandise that they might otherwise never find. The World Wide Web allows users to find information on goods and services, pictures of products, samples of music (used by record companies), short videos showing the product or service, and samples of programs. Although a consumer cannot order directly from the Web site, the business will often give a voice telephone number or an order form that the customer can print out and send through the mail. Although Web sites have magazine-like appeal, storing large amounts of textual data on them is often difficult. Gopher (pronounced like "go-for") is set up like a filing cabinet to allow the user more flexibility in retrieval. Gopher is similar to the white/yellow pages in the way information is retrieved word for word. Gopher sites are also a lot cheaper and easier to set up, which gives small businesses an easy way to set up shop. Consumers can find reviews, technical information, and other bits and pieces of information there. Each person who uses the Internet has an identification that sets them apart from everyone else.
These identifications are often called handles (from the old shortwave radio days). Electronic mail addresses allow information exchange from user to user. Businesses can take advantage of this by sending current information to many users. A user must first subscribe to the mailing list; then the computer adds them to the update list. Usually, companies will send out a monthly update. This informs users of upgrades to their products (usually software), refinements (new hardware drivers, faster code, bug fixes, etc.), new products, question bulletins where subscribers can post questions and answers, and links (addresses) to sites where new company information can be found.

Comments and Opinions

This article pointed out the key information that anyone interested in representing their company on the Internet might find useful. It then explained the few key elements that comprise this complete and ever-expanding system. It was also a fair lead-in to the next articles, on software used to create Web pages, e-mail lists, Gopher sites, and FTP sites (FTP is similar to Gopher). It showed the pace at which the Internet is growing, and the use it could serve businesses in expanding their outreach. I have personally used these services to find businesses that sell hard-to-find products. Through the World Wide Web I have found specialty companies that I believe I would not have found otherwise. The article showed essentials of Web savvy, such as the availability of video and sound (music) files. Speaking as a consumer, I can say that I have purchased at least two compact discs after hearing the short samples released by the record companies. The video clips are eye-catching and may influence people to buy a company's products. I was disappointed in the information on Gopher. It mainly showed the differences between Gopher and the World Wide Web instead of explaining what Gopher is.
The article also made an irrelevant reference to UNIX (a text-based operating system used on expert systems) books' search features, and its cross-referencing of HTTP (the protocol the World Wide Web uses) might mislead the reader. Gopher is a very powerful tool that businesses with an on-line presence and information worth reading should be aware of. The business-related information on electronic mailing lists did nothing other than point out a few groups available. It briefly touched on intelligent agents, which are the backbone of e-mail publications. Although it was detailed on publications, there was little theory of operation that a business looking into this route of information distribution might find of use. It did, however, explain the addressing system. Overall this article was a decent overview of the business use of the Internet. It pointed out the three major areas that companies are racing to settle. It gave much useful information on the World-Wide Web, which is currently the business magnet. Reading this article is a step in the right direction for any business seeking an on-line presence.

f:\12000 essays\technology & computers (295)\Modems.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Modems are used to connect two computers over a phone line. Modem is short for Modulator/Demodulator. It is a device that converts data from digital computer signals to analog signals that can be sent over a phone line. This is called modulation. The analog signals are then converted back into digital data by the receiving modem. This is called demodulation. A modem is fed digital information, in the form of ones and zeros, from the CPU. The modem then analyzes this information and converts it to analog signals that can be sent over a phone line. Another modem then receives these signals, converts them back into digital data, and sends the data to the receiving CPU.
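The modulation/demodulation round trip described above can be sketched in a few lines of Python. This is a toy illustration, not any real modem standard: the sample rate, the two tone frequencies, and the function names are all invented for the example (the scheme loosely resembles simple frequency-shift keying, where a 0 bit and a 1 bit are sent as bursts of two different tones).

```python
import math

SAMPLE_RATE = 8000      # samples per second (invented for the example)
BAUD = 100              # tone bursts (signal changes) per second
FREQ_0 = 1000           # tone representing a 0 bit, in Hz
FREQ_1 = 2000           # tone representing a 1 bit, in Hz
SAMPLES_PER_BIT = SAMPLE_RATE // BAUD

def modulate(bits):
    """Convert digital bits into an analog-style list of samples."""
    samples = []
    for bit in bits:
        freq = FREQ_1 if bit else FREQ_0
        for n in range(SAMPLES_PER_BIT):
            samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples

def demodulate(samples):
    """Recover bits by counting zero crossings in each tone burst."""
    bits = []
    for i in range(0, len(samples), SAMPLES_PER_BIT):
        burst = samples[i:i + SAMPLES_PER_BIT]
        crossings = sum(
            1 for a, b in zip(burst, burst[1:]) if (a < 0) != (b < 0)
        )
        # FREQ_1 produces about twice as many crossings per burst as
        # FREQ_0, so a threshold halfway between the two separates them.
        threshold = (2 * FREQ_0 + 2 * FREQ_1) / 2 * (len(burst) / SAMPLE_RATE)
        bits.append(1 if crossings > threshold else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert demodulate(modulate(message)) == message
```

Real modems layer far more on top of this (multi-bit symbols, error correction, compression), but the sketch shows the essential idea: the sending side maps bits onto an analog waveform, and the receiving side measures the waveform to get the bits back.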
At connection time, modems send tones to each other to negotiate the fastest mutually supported modulation method that will work over whatever quality of line has been established for that call. Modems for the PC come in two main types: internal and external. Evolution of Modems In the last 10 years, modem users have gone from data transfer rates of 300 bps to 1,200 bps, 2,400 bps, 9,600 bps, 14.4Kbps, 28.8Kbps, and 33.6Kbps. Now new modem standards are emerging, reaching speeds of up to 56Kbps. Unlike the small step from 28.8Kbps to the 33.6Kbps modems being sold today, 56Kbps is a significant improvement. Viewing complex graphics or downloading sound files improves noticeably at 56Kbps. The modem experts keep telling us that we are about maxed out. When the 28.8 modems were first introduced, they said we had reached our maximum speed; the same thing was said about 33.6, and now again about 56K. But how true is this? The experts say that the next major improvement will have to come from the telephone companies, when they start laying down fiber-optic cables so we can have Integrated Services Digital Network (ISDN). What makes digital transmission better than analog is that with analog modems, transmission errors are very frequent, which can leave your modem freezing or simply giving up. These errors are caused mainly by noise on the line due to lightning storms, sunspots, and other fascinating electromagnetic phenomena; noise can occur anywhere on the line between your PC and the computer you are communicating with 2,000 miles away. Even if line noise is minimal, most modems will automatically reduce their speed to avoid introducing data errors. Baud vs. bps When talking about modems, transmission speed is the source of a lot of confusion. The root of the problem is that the terms "baud" and "bits per second" are used interchangeably. 
This is partly because it is easier to say "baud" than "bits per second," though misinformation has a hand in it, too. A baud is a change in signal from positive to negative (or vice versa) that is used as a measure of transmission speed, while bits per second is a measure of the number of data bits (digital 0's and 1's) transmitted each second over a communications channel. This is sometimes referred to as "bit rate." Individual characters (letters, numbers, spaces, etc.), also referred to as bytes, are composed of 8 bits. Technically, baud is the number of times per second that the carrier signal shifts value. For example, a 1,200 bit-per-second modem actually runs at 300 baud, but it moves 4 bits per baud (4 x 300 = 1,200 bits per second). Synchronous vs. Asynchronous Data Transfer Synchronous and asynchronous data transfer are two methods of sending data over a phone line. In synchronous data transmission, data is sent as a bit stream, which carries a group of characters in a single stream. To do this, modems gather groups of characters into a buffer, where they are prepared to be sent as such a stream. For the stream to be sent, synchronous modems must be in perfect synchronization with each other. They accomplish this by sending special characters, called synchronization (or "syn") characters. When the clocks of the two modems are in synchronization, the data stream is sent. In asynchronous transmission, data is coded into a series of pulses, including a start bit and a stop bit. A start bit is sent by the sending modem to inform the receiving modem that a character is about to be sent. The character is then sent, followed by a stop bit designating that the transfer of that character is complete. Modem Speeds A full page of English text is about 16,000 bits, and viewing full-motion, full-screen video would require roughly 10,000,000 bits per second, depending on data compression. 
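Both points above reduce to small arithmetic, shown here as a sketch: the baud-to-bps relationship, and the cost of asynchronous framing assuming the common one-start-bit, one-stop-bit configuration.

```python
# Baud vs. bps: bits per second = signal changes per second (baud)
# multiplied by the bits carried per signal change.
def bps(baud, bits_per_symbol):
    return baud * bits_per_symbol

print(bps(300, 4))  # the 300-baud modem above, 4 bits per baud -> 1200

# Asynchronous framing: wrap each 8-bit character in a start bit (0)
# and a stop bit (1), so 10 line bits are spent per 8 data bits.
def frame(byte):
    return '0' + format(byte, '08b') + '1'

framed = frame(ord('A'))  # 'A' = 65 = 01000001 in binary
print(framed)             # '0010000011'
print(len(framed))        # 10 -- a 25% overhead over the raw 8 bits
```

The 25% framing overhead is one reason a rated line speed never translates directly into that many data bits delivered per second.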
The Past
  300 bps (both ways)
  1,200 bps (both ways)
  2,400 bps (both ways)
  9,600 bps (both ways)
  14,400 bps (both ways)
Current Speeds
  28,800 bps (both ways)
  33,600 bps (both ways)
  X2 or K56Plus: 56,000 bps (downloading) / 33,600 bps (uploading)
  ISDN, single channel: 64,000 bps (both ways)
  ISDN, two channels: 128,000 bps (both ways)
  SDSL: 384,000 bps (both ways)
  Satellite integrated modem: 400,000 bps (downloading)
  ADSL (T-1): 1,544,000 bps (downloading) / 128,000 bps (uploading)
  Cable modem (T-1): 1,600,000 bps (both ways) (Videotron)
  Ethernet (T-2): 10,000,000 bps (both ways)
  Cable modem (T-2): 10 to 27,000,000 bps (both ways) (in general)
  FDDI (T-3): 100,000,000 bps (both ways)
In some cases, a PC with a 28.8Kbps modem can be faster than one with a 33.6Kbps or even 56K modem, especially on sites that don't have a great deal of graphics. That's because several factors determine how long it takes to reach and display a web site: the speed of your PC, your connection to your Internet service provider, your ISP's connection to the Internet itself, traffic on the Internet, and the speed and current traffic conditions on the site you're visiting. A good analogy: if you drive a fancy sports car and I drive along in my family minivan, you'll certainly beat me on an open stretch of road; but if we're both stuck in a traffic jam, you'll move just as slowly as I do. In short, any modem will sometimes operate below its rated speed. According to the vice president of a major 33.6Kbps modem company, you can expect a full 33.6Kbps connection about one out of 10 tries. X2 56K Modem U.S. Robotics, Cardinal, Rockwell, and other manufacturers have developed modems capable of 56K speeds over standard phone lines. U.S. Robotics' line of modems, called X2, uses an "asymmetric" scheme. Basically, it lets you download data at up to 56Kbps from any on-line service or Internet service provider using matching U.S. Robotics modems. 
The company says AOL, Prodigy, Netcom, and others are committed to deploying the X2 technology. The only catch is that the data you upload to the provider is still limited to 33.6Kbps or 28.8Kbps. The main reason everyone has not yet leaped to 56Kbps is that there are no set standards yet. Not all modem vendors are supporting the same 56Kbps specification, which means your Rockwell-based modem won't work with a U.S. Robotics or Logicode model. ISDN ISDN (Integrated Services Digital Network) is a way to move more data over existing regular phone lines. ISDN cards are like modems, but approximately five times faster than regular 28.8 modems. They require special telephone service, which costs a little or a lot, depending on your phone company. ISDN can provide speeds of roughly 128,000 bits per second over regular phone lines. ISDN has a couple of advantages. It uses the same pair of copper wires found in regular phone lines, so the phone company won't necessarily have to run new wires into your house or business. A single physical ISDN line offers two 64Kbps channels that can be used for voice and data. Unfortunately, ISDN isn't cheap. Installation fees can run a couple hundred dollars, and setup can be confusing. ISDN also requires a special digital adapter for your PC that costs around $200. And though you could replace your old phone line with ISDN, I wouldn't recommend it: an ISDN line goes through a converter powered by AC current, and if your power fails, so does your phone line. Satellite Modems The Internet access service by satellite is called DirecPC. It was created by an American telecommunications company, Hughes Network Systems Inc. DirecPC offers speeds of up to 400Kbps. That's nearly 14 times faster than a standard 28.8Kbps modem and four times faster than ISDN (Integrated Services Digital Network). 
The drawback to this system is that it's expensive, requires a relatively elaborate installation and configuration, and, in the end, doesn't necessarily speed up your access to the World Wide Web. The price for the 21" dish, PC card, and software is about $499 U.S. retail. Then there is a $49.95 U.S. one-time activation fee. The monthly charges start at $9.95 U.S., but that is for a limited account that also requires you to pay to download data. The "Moon Surfer" account, which costs $39.95 U.S., gives you unlimited access nights and weekends. If you want unlimited access during the day, you'll have to pay $129 U.S. a month for the "Sun Surfer" plan. Customers pay between $149 and $199 U.S. for professional help, or $89 U.S. per hour plus materials if custom installation is required. If you choose to install the dish at ground level, Hughes Network Systems has also designed a hollow fiberglass camouflage that looks like a huge rock and can be put over the dish to prevent it from being stolen. In addition to these charges, you also need to be signed up with an Internet service provider, or ISP, which costs about $20 a month. You can use any ISP other than on-line services such as Prodigy or America On-line. The reason you need an ISP is that DirecPC is a one-way system. The satellite sends data to your PC, but you need a standard modem and a regular ISP to send data or commands to the DirecPC network. The data you send flows at the speed of your modem, normally a 28.8Kbps modem. The fact that the satellite is only one-way isn't as bad as it might seem. Most users send very little data compared with what they receive. If you wish to view a web site, for example, you would send the web address to the system via the modem, but the site's text and graphics would rush back to you via the satellite. Since the address is typically only a few bytes, that takes almost no time at all, even if you have a slow modem. 
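The asymmetry is easy to quantify. A back-of-the-envelope sketch: the 60-byte request and 200 KB page are invented round numbers, and real transfers add protocol overhead on top of these idealized times.

```python
def transfer_seconds(num_bytes, bits_per_second):
    """Idealized time to move num_bytes over a link of the given speed."""
    return num_bytes * 8 / bits_per_second

request = transfer_seconds(60, 28_800)         # web address sent via modem
response = transfer_seconds(200_000, 400_000)  # page returned via satellite
print(round(request, 3))   # 0.017 -- negligible even on a slow modem
print(round(response, 1))  # 4.0  -- where the satellite speed pays off
```

Even with the upload path hundreds of times slower than the download path, the request costs a few hundredths of a second, which is why the one-way design works for ordinary browsing.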
The data from the site itself takes far more time, especially if it has a lot of graphics. Those who upload a lot of data, including people who need to update their own web sites, will get no advantage from the satellite system while they are uploading. In addition to the dish, you get a 16-bit card that plugs into an ISA slot of a desktop PC. The drawback is that this rules out Macs, notebook PCs, and any other machines that don't have available slots. You will find a noticeable difference when viewing sites with video and lots of graphics. This could eventually be a big advantage as an increasing number of information providers start using the Internet for full-motion video and other multimedia presentations. But for now, DirecPC doesn't offer spectacular advantages for normal web surfing. And if you're thinking about a long-term investment, consider that in the future there will be other options for high-speed Net access. ADSL / SDSL ADSL (Asymmetric Digital Subscriber Line) is a method for moving data over regular phone lines. An ADSL circuit is much faster than a regular phone connection, and the wires coming into the subscriber's home are the same copper wires used for regular phone service. An ADSL circuit must be configured to connect two specific locations. A commonly used configuration of ADSL allows a subscriber to download data at speeds of up to 1.544 megabits per second and to upload data at speeds of 128 kilobits per second. ADSL is often used as an alternative to ISDN, allowing higher speeds in cases where the connection is always to the same place. SDSL (Symmetric Digital Subscriber Line) is a different configuration of ADSL capable of 384 kilobits per second in both directions. Cable modems Another type of modem is the cable modem. It uses the same black coaxial cable that connects millions of TVs nationwide, which is also capable of carrying computer data at the same time. 
It can upload and download at approximately 10 to 27 megabits per second. A 500K file that might take a minute or more to download via ISDN takes about a second over cable. Classification Of Modems Connections that carry data at 1,544,000 bits per second are called T-1. At maximum capacity, a T-1 line can move a megabyte in less than 10 seconds. That is still not fast enough for full-screen, full-motion video, for which you need at least 10,000,000 bits per second. T-1 is the fastest speed commonly used to connect networks to the Internet. Connections that carry data at 3,152,000 bits per second are referred to as T-1C. Connections that carry data at 6,312,000 bits per second are referred to as T-2, and connections that carry data at 44,736,000 bits per second are referred to as T-3. This is more than enough for full-screen, full-motion video. Connections that carry data at 274,176,000 bits per second are referred to as T-4. Ethernet A very common method of networking computers in a LAN (local area network) is called Ethernet. It handles about 10,000,000 bits per second and can be used with almost any kind of computer. FDDI FDDI (Fiber Distributed Data Interface) is a standard for transmitting data on optical fiber cables at a rate of around 100,000,000 bits per second. It's 10 times as fast as Ethernet, and approximately twice as fast as T-3. Most of the connections mentioned, such as T-1, T-2, and T-3, are not intended for home use. These high-speed connections are used mainly by big businesses. Even then, speeds such as T-4 and FDDI see little use among big companies; they belong more to the Army, NASA, the government, and the like. They are highly priced, which makes them available only to larger corporations and organizations that need to send huge amounts of data from one place to another in little or no time. 
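The times quoted throughout this section follow from a single formula: seconds = bytes x 8 / bits per second. A sketch with idealized numbers (real links carry protocol overhead and congestion, which is why actual downloads run slower than these figures):

```python
def seconds_to_send(num_bytes, bits_per_second):
    """Idealized transfer time, ignoring overhead and line conditions."""
    return num_bytes * 8 / bits_per_second

FILE = 500 * 1024  # the "500K" file from the text

for name, speed in [('28.8K modem', 28_800),
                    ('ISDN, two channels', 128_000),
                    ('T-1', 1_544_000),
                    ('Cable modem (10 Mbps)', 10_000_000)]:
    print(f'{name}: {seconds_to_send(FILE, speed):.1f} s')

# The essay's closing CD-ROM figure checks out: 650 MB over 100 Mbps FDDI.
print(seconds_to_send(650_000_000, 100_000_000))  # 52.0 seconds
```

The same function reproduces the cable-versus-ISDN comparison above: about half a minute on two-channel ISDN versus well under a second at cable speeds.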
Price aside, when would you need to transfer the full contents of a CD-ROM (650 MB) across the world in 52 seconds? f:\12000 essays\technology & computers (295)\Morality and Ethics and Computers.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Morality and Ethics and Computers There are many different sides to the discussion on moral and ethical uses of computers. In many situations, the morality of a particular use of a computer is up to the individual to decide. For this reason, absolute laws about ethical computer usage are almost, but not entirely, impossible to define. The introduction of computers into the workplace has raised many questions as well: Should employers make sure the workplace is designed to minimize health risks such as back strain and carpal tunnel syndrome for people who work with computers? Can employers prohibit employees from sending personal memos by electronic mail to a friend on the other side of the office? Should employers monitor employees' work on computers? If so, should employees be warned beforehand? If warned, does that make the practice okay? According to Kenneth Goodman, director of the Forum for Bioethics and Philosophy at the University of Miami, who teaches courses in computer ethics, "There's hardly a business that's not using computers."1 This makes these questions all the more important for today's society to answer. There are also many moral and ethical problems dealing with the use of computers in the medical field. In one particular case, a technician trusted what he thought a computer was telling him, and administered a deadly dose of radiation to a hospital patient.2 In cases like these, it is difficult to decide whose fault it is. It could have been the computer programmer's fault, but Goodman asks, "How much responsibility can you place on a machine?"3 Many problems also occur when computers are used in education. 
Should computers replace actual teachers in the classroom? In some schools, computers and computer manuals have already started to replace teachers. I would consider this an unethical use of computers because computers do not have the ability to think and interact on an interpersonal basis. Computers "dehumanize human activity"4 by taking away many jobs and making many others "boring exercises in pushing the buttons that make the technology work."5 Complete privacy is almost impossible in this computer age. By using a credit card or check-cashing card, entering a raffle, or subscribing to a magazine, people provide information about themselves that can be sold to marketers and distributed to databases throughout the world. When people use the World Wide Web, the sites they visit and download things from make a record that can be traced back to the person.6 This is not protected, as it is when books are checked out of a library. Therefore, information about someone's personal preferences and interests can be sold to anyone. A health insurance company could find out whether a particular person had bought alcohol or cigarettes and charge that person a higher rate because he or she is a greater health risk. Although something like this has not been reported yet, there are no laws against it at this point. More and more database companies are monitoring individuals with little regulation. "Other forms of monitoring - such as genetic screening - could eventually be used to discriminate against individuals not because of their past but because of statistical expectations about their future."7 For instance, people who do not have AIDS but carry the antibodies are being discharged from the U.S. military and also fired from some jobs. Who knows whether this kind of medical information could lead employers to make employment decisions based on possible future illnesses rather than on job qualifications. Is this an ethical use of computers? 
One aspect of computers that is surely immoral and unethical is computer crime, which has been on the rise lately. There are many different types of computer crime. Three main types are creating computer viruses, making illegal copies of software, and actually stealing computers. Computer viruses have been around for a decade, but they became infamous when the Michelangelo virus caused a scare on March 6, 1992. According to the National Computer Security Association in Carlisle, Pennsylvania, there are 6,000 known viruses worldwide, and about 200 new ones show up every month.8 These viruses spread quickly and easily and can destroy all information on a computer's hard drive. Now, people must buy additional software just to detect viruses and possibly repair infected files. Making illegal copies of software is also a growing problem in the computer world. Most people see no problem in buying a computer program and giving a copy to a friend or co-worker. Some people even make copies and sell them to others. Software companies are starting to require computer users to type in a code before using the software. They do this in many ways. Sometimes they require you to use a "code wheel" or look up the code in a book. The software companies go through this trouble to discourage people from making illegal copies, because every copy that is made is money the company has lost. One other thing that is just starting to become a problem is actual computer theft. With the introduction of notebook computers came a rise in computer theft. The same qualities that make these computers perfect for business travelers - their small size and light weight - make them very easy for thieves to steal as well. In 1994, 295,000 computers were reported stolen, with resulting losses totaling over 981 million dollars.9 The amount lost to theft is about twice the amount lost in all forms of computer malfunction or breakage. 
The biggest news related to computers lately always seems to be about the Internet. The Internet began decades ago, but it is just now becoming popular with the general public as technology advances and becomes cheaper. There are many aspects of the Internet that can lead people into discussions concerning morality and ethics. Much of the discussion of the Internet has to do with freedom of speech and the First Amendment. Most Americans probably believe that the First Amendment is moral because it is a national law. The problems arise because different people interpret the First Amendment in different ways. In most cases since its ratification in 1791, the First Amendment has been easily defined and understood, but every once in a while a situation appears that blurs the lines. The Internet has caused one of these situations. There is information on the Internet about everything from drugs to making bombs. The United States government is trying to decide whether or not it should censor material on the Internet. The government does not censor information like this in public libraries, so why should it censor this information on the Internet? The government censors information like this on television, though, so why wouldn't it censor it on the Internet? If the government went strictly by the First Amendment, it would not censor anything on the Internet, because that would be a violation of free speech. It is obvious, though, that the government does not always go directly by the First Amendment, so this leaves the topic open to discussion. Some people argue that this information would be dangerous if it got into the wrong hands. Much of the information in the world would be dangerous if it got into the wrong hands. Does this mean that we should perform background checks and psychiatric tests on everyone before we give them any information? I believe it is unethical to withhold information from anyone. All information should be given out freely. 
It is up to the individual to decide how to use the knowledge they have. Many people complain that there are a large number of sick and demented people on the Internet. There are a large number of sick and demented people in the "real" world as well. In fact, the same people who are on the Internet are in the real world, too. There is not much we can do about them except arrest the ones who take their sickness and dementia too far and break the law. Computers can be harmful and beneficial to people in many different ways. The ways computers are beneficial are the most obvious. Computers can entertain us, and they can save us time and energy, as well as saving us from performing boring and laborious tasks. Computers can also be physically harmful to people. People who use computers too much can suffer vision loss, to varying degrees, from staring at the screen for extended lengths of time. They can also have problems with the muscles in their hands from typing so often. They can acquire back problems from sitting in chairs behind desks at computer screens all day long. Some people say that computers allow humans to cheat. They give us the answers. They allow us to stop thinking. These people believe it is unethical for the computers to do the work for us. They may be right in that some humans let computers do their work for them, but if people did not make use of new inventions and time-savers, farmers would still be plowing with a horse and we would still be cooking on an open fire. Until computers exhibit actual artificial intelligence, though, we are still the ones doing the thinking. We program the computers to do what we want them to do. In conclusion, I believe that, in most situations involving computers, the morality or immorality of an action is up to the individual to decide, as it would be if computers were not involved. We have seen, though, that there are many instances in which people have, without a doubt, acted immorally and unethically. 
1 Timothy O'Conner, "Computers Creating Ethical Dilemmas," USA Today Magazine (September 1995): 7. 2 Max Frankel, "Cyberrights," The New York Times Magazine (February 12, 1995): 26. 3 O'Conner, 7. 4 James Coates, "Unabomber Case Underscores an On-Line Evil," Chicago Tribune (April 14, 1996): 5. 5 Coates, 5. 6 O'Conner, 7. 7 Tom Forester, Computers in the Human Context (Cambridge: The MIT Press, 1989), 403. 8 Stephen A. Booth, "Doom Virus," Popular Mechanics (June 1995): 51. 9 Philip Albinus, "Have You Seen This PC?," Home Office Computing (February 1996): 17. f:\12000 essays\technology & computers (295)\Multimedia Presentation Programs.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ COMPUTER MULTIMEDIA Sam Quesinberry Computers have come a long way very fast since their start in the 1940s. In the beginning they were mainly used for keeping financial records by banks and insurance companies, and for mathematical computations by engineers and the U.S. military. However, exciting new applications have developed rapidly in the last few years. Two of these areas are computer graphics and sound. Computer graphics is the ability of the computer to display, store, and transmit visual information in the form of pictures. Currently there are two main uses for this ability: one is the creation of movies, and the other is computer games. Computer visual information is also increasingly being used in other computer applications, such as photographic storage and the Internet. Computers can also store, transmit, and play back sound. When a picture or a sound is stored on a computer, it is said to be digitized. There are two main ways of digitizing a picture. One is vector graphics, in which the information in the picture is stored as mathematical equations. Engineering drawing applications such as CAD (computer-aided design) use this method. The other method is bit-mapped graphics. 
Here the computer actually keeps track of every point in the picture and its description. Paint programs use this technique. Drawing programs are usually vector-based and paint programs are usually bit-mapped. Computer sound is handled in two different ways. The sound can be described digitally and stored as an image (wave format) of the actual sound, or it can be translated into what is called MIDI format, which is chiefly for music. For a piano, for instance, the information for what key is hit, for how long, and at what intensity is stored and retrieved. This is rather like the way an old player piano worked. Computer graphics applications were originally developed on large computers. The computer hardware and software were developed by individuals and groups working independently. These projects were very expensive and were carried on by large companies and investment groups. Applications which only a few years ago would have cost millions of dollars can now be run on a desktop computer with programs costing under $100. It is the purpose of this paper to research and examine several areas of computer multimedia by using a typical application program in each related area. These areas are:
Paint Programs - Photo Finish - Zsoft
3D Rendering Programs - 3d f/x - Asymetrix
Animation Programs - Video Artist - Reveal
Morphing Programs - Video Artist - Reveal
Sound Recording Programs - MCS Music Rack - Logitech
Midi Recording Programs - Midisoft Recording Session - Logitech
Multimedia Programs - Interactive - HSC Software
Paint Programs One of the first paint programs was SuperPaint, created by Richard Shoup at Xerox's Palo Alto Research Center. To demonstrate a paint program, Photofinish by Zsoft will be used to import and modify a photograph. Photofinish is an inexpensive paint program costing under $50. First a photograph is scanned into the paint program using a scanner. The photograph is cleaned up and a title is added. 
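The vector versus bit-mapped distinction described above can be shown with a toy example: the same horizontal line stored first as endpoint data (from which an equation can regenerate it at any resolution), then as every point in a small grid. The data structures here are invented purely for illustration.

```python
# Vector form: the line is stored as just two endpoints.
vector_line = {'from': (0, 2), 'to': (4, 2)}

# Bit-mapped form: every point in a 5x5 grid is recorded (1 = ink, 0 = blank).
bitmap = [[0] * 5 for _ in range(5)]
for x in range(5):
    bitmap[2][x] = 1  # rasterize the same horizontal line at y = 2

print(vector_line)
for row in bitmap:
    print(''.join(str(dot) for dot in row))
```

The trade-off is visible even at this scale: the vector form is tiny and scales cleanly, while the bitmap grows with the square of the resolution but can record arbitrary detail, which is why paint programs and scanned photographs use it.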
3D Rendering Programs 3D rendering programs are used in the movies to create special effects, such as those in Star Wars. These programs were developed over a period of time, growing steadily more advanced. Lucasfilm was one of the first companies to develop 3D rendering programs for computers, and the effects were one of the reasons its productions became so popular. Here is an example of what a 3D rendering program can do. The name of the program that I'm using is 3d f/x by Asymetrix. Animation Programs One of the first companies to create animation software was Autodesk. The Disney studios were also among the first to develop animation software. A couple of years ago the Disney computer animation department had only two animators, but now there are 14. Computer animation has greatly reduced the human effort of making cartoons. A full-length Disney film used to require over 600 animators; now it can be done with approximately 125. The first full-length computer-animated movie, Toy Story, came out around six months ago. The program that I am using to explore animation is Reveal's Video Artist. I captured an old cartoon from a 16 mm film made in 1913 and used the program to edit and digitize it to floppy disk. The cartoon can now be viewed under Windows using Multimedia Player. Morphing Programs Tom Brigham, a programmer and animator at NYIT, astounded the audience at the 1982 SIGGRAPH conference. He had created a video sequence showing a woman distort and transform herself into the shape of a lynx. Thus was born a new technique called "morphing." It was destined to become a required tool for anyone producing computer graphics or special effects in the film or television industry. The morphing program that I am using to demonstrate the technique is Reveal's Morph Editor. 
The following segment is a clip of my dad being morphed into my sister. Sound Recording Programs Wave Files - Computer programs can be used to record and digitize actual sound. These applications were developed at the same time as the graphics applications. The sound is converted from some analog source, such as a radio, tape player, or live microphone, and stored to one of the computer's mass storage devices, such as a hard disk or floppy disk. Software editors can then be used to edit the wave file, and special effects such as noise reduction and reverb can be added. The wave editor that I'm using to explore the computer's ability to handle sound is from Logitech. I recorded a segment from an old 78 rpm record and used the editor to clean up the sound. It was a tremendous improvement over the original recording. The following is a view of the editor window with the sound file loaded. Midi Recording Programs Midi Files - There is another method by which computers can record sound that is nothing like traditional sound recording. An actual musical instrument can be hooked to the computer, and the computer records the actual notes struck, their duration, intensity, etc. This is an extremely efficient way to record music, known as MIDI. The files created by this process are a fraction of the size of files created by waveform recording. This method may also be used even if there is no MIDI instrument: the notes can be entered or scanned into the computer from a regular piece of sheet music, and the computer is then able to translate these entries into the required MIDI file. The program I used to examine this technique is Midisoft Recording Session. A piece of sheet music was entered into the computer one note at a time. If a synthesizer is used to play the file, the piece can be turned into an orchestral arrangement. This is a screen shot of the music loaded into the program. 
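The size advantage of MIDI over waveform recording follows directly from what each format stores. A rough sketch: the CD-quality figures (44,100 samples per second, 16-bit, stereo) are standard, but the six-bytes-per-note MIDI figure and the 600-note minute are illustrative assumptions.

```python
def wave_bytes(seconds, sample_rate=44_100, bytes_per_sample=2, channels=2):
    """Waveform audio stores every sample of the sound itself."""
    return seconds * sample_rate * bytes_per_sample * channels

def midi_bytes(notes, bytes_per_event=6):
    """MIDI stores only the note events: pitch, timing, intensity."""
    return notes * bytes_per_event

minute_of_wave = wave_bytes(60)   # over 10 MB of raw samples
minute_of_midi = midi_bytes(600)  # 600 notes of busy piano: 3,600 bytes
print(minute_of_wave // minute_of_midi)  # thousands of times smaller
```

The exact ratio depends on the recording settings and how busy the music is, but the principle holds: describing the performance is far cheaper than describing the sound wave.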
Multimedia Presentation Programs Finally, this is the class of programs which can be used to tie all the products of the foregoing programs together. Multimedia interactive programs allow the user to combine graphics, animation, sound, and interactive elements into a presentation. These presentations can be slide shows of still images accompanied by music, or sequences of animation. They can allow the user to be passive and merely watch, or permit the user to interact by answering questions or specifying when the next event is to begin. The Internet itself can be thought of as an interactive application, but for the purposes of this paper I am only looking at a computer in a stand-alone configuration. There are many programs which allow one to tie all multimedia elements together, but the one I have is Interactive by HSC Software. The following are a few of the 100 slides we used to create a slide show of our trip to Olympic National Park. The show was linked to music on a CD-ROM, the Music of Olympia. Once the slide show ran on the computer, it was transferred to videotape using a VGA-to-television converter. f:\12000 essays\technology & computers (295)\Natural Language Processing.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ There have been high hopes for Natural Language Processing. Natural Language Processing, also known simply as NLP, is part of the broader field of Artificial Intelligence, the effort toward making machines think. Computers may appear intelligent as they crunch numbers and process information with blazing speed. In truth, computers are nothing but dumb slaves that only understand on or off and are limited to exact instructions. But since the invention of the computer, scientists have been attempting to make computers not only appear intelligent but be intelligent. 
A truly intelligent computer would not be limited to rigid computer-language commands, but would instead be able to process and understand the English language. This is the concept behind Natural Language Processing. The phases a message would go through during NLP consist of message, syntax, semantics, pragmatics, and intended meaning. (M. A. Fischer, 1987) Syntax is the grammatical structure. Semantics is the literal meaning. Pragmatics is world knowledge, knowledge of the context, and a model of the sender. When syntax, semantics, and pragmatics are all applied, accurate Natural Language Processing will exist. Alan Turing made a prediction about NLP in 1950 (Daniel Crevier, 1994, page 9): "I believe that in about fifty years' time it will be possible to program computers .... to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning." But computer technology in 1950 was limited. Because of these limitations, NLP programs of that day focused on exploiting the strengths the computers did have. For example, a program called SYNTHEX tried to determine the meaning of sentences by looking up each word in its encyclopedia. Another early approach was Noam Chomsky's at MIT. He believed that language could be analyzed without any reference to semantics or pragmatics, simply by looking at the syntax. Neither of these techniques worked. Scientists realized that their Artificial Intelligence programs did not think the way people do, and since people are much more intelligent than those programs, they decided to make their programs think more like a person would. So in the late 1950s, scientists shifted from trying to exploit the capabilities of computers to trying to emulate the human brain. (Daniel Crevier, 1994) Ross Quillian at Carnegie Mellon wanted to program the associative aspects of human memory to create better NLP programs.
(Daniel Crevier, 1994) Quillian's idea was to determine the meaning of a word by the words around it. For example, look at these sentences: After the strike, the president sent him away. After the strike, the umpire sent him away. Even though these sentences are the same except for one word, they have very different meanings because of the meaning of the word "strike". Quillian said the meaning of "strike" should be determined by looking at the subject. In the first sentence, the word "president" makes the word "strike" mean labor dispute. In the second sentence, the word "umpire" makes the word "strike" mean that a batter has swung at a baseball and missed.

In 1958, Joseph Weizenbaum had a different approach to Artificial Intelligence, which he discusses in this quote (Daniel Crevier, 1994, page 133): "Around 1958, I published my first paper, in the commercial magazine Datamation. I had written a program that could play a game called "five in a row." It's like ticktacktoe, except you need rows of five exes or noughts to win. It's also played on an unbounded board; ordinary coordinate will do. The program used a ridiculously simple strategy with no look ahead, but it could beat anyone who played at the same naive level. Since most people had never played the game before, that included just about everybody. Significantly, the paper was entitled: "How to Make a Computer Appear Intelligent" with appear emphasized. In a way, that was a forerunner to my later ELIZA, to establish my status as a charlatan or con man. But the other side of the coin was that I freely stated it. The idea was to create the powerful illusion that the computer was intelligent. I went to considerable trouble in the paper to explain that there wasn't much behind the scenes, that the machine wasn't thinking. I explained the strategy well enough that anybody could write that program, which is the same thing I did with ELIZA."
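Quillian's idea of letting a neighboring word select a word's sense can be illustrated with a toy sketch. The sense table below is invented purely for illustration; Quillian's actual model used rich networks of associated concepts, not a two-entry lookup table.

```python
# Toy sketch of Quillian-style sense selection: the subject of the
# sentence picks the sense of an ambiguous word. The table is invented
# for illustration; Quillian's real model used semantic networks.
SENSES = {
    "strike": {
        "president": "labor dispute",
        "umpire": "called against the batter",
    },
}

def disambiguate(word, subject):
    """Return the sense of `word` suggested by the sentence's subject."""
    return SENSES.get(word, {}).get(subject, "sense unknown")
```

With "president" as the subject, the sketch picks the labor-dispute sense; with "umpire", the baseball sense; with an unlisted subject, it gives up, which is exactly where the hard part of the real problem begins.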
ELIZA was a program written by Joseph Weizenbaum which communicated with its user while impersonating a psychotherapist. Weizenbaum wrote the program to demonstrate the tricky alternatives to having programs analyze syntax, semantics, or pragmatics. One of ELIZA's tricks was mirroring sentences. Another trick was to pick a sentence from earlier in the dialogue and return it attached to a leading phrase at random intervals. Also, ELIZA would watch for a list of key words, transform the sentence in some way, and return it attached to a leading phrase. These tricks worked well in the context of a psychiatrist, who encourages patients to talk about their problems and answers their questions with other questions. However, these same tricks do not work well in other situations.

In 1970, William Woods, an AI researcher at Bolt, Beranek, and Newman, described an NLP method called the Augmented Transition Network (ATN). (Daniel Crevier, 1994) The idea was to look at the case of each word: agent (instigator of an event), instrument (stimulus or immediate physical cause of an event), and experiencer (undergoes the effect of the action). To tell the cases apart, Fillmore put restrictions on them, such as that an agent had to be animate. For example, in "The heat is baking the cake", the cake is inanimate and therefore the experiencer. The heat would be the instrument. An ATN could mix syntax rules with semantic props, such as knowing a cake is inanimate. This worked out better than any other NLP technique to date. ATNs are still used in most modern NLP systems.

Roger Schank, a Stanford researcher, wrote (Daniel Crevier, 1994, page 167): "Our aim was to write programs that would concentrate on crucial differences in meaning, not on issues of grammatical structure .... We used whatever grammatical rules were necessary in our quest to extract meanings from sentences but, to our surprise, little grammar proved to be relevant for translating sentences into a system of conceptual representations." Schank reduced all verbs to 11 basic acts.
Some of them are ATRANS (to transfer an abstract relationship), PTRANS (to transfer the physical location of an object), PROPEL (to apply physical force to an object), MOVE (for its owner to move a body part), MTRANS (to transfer mental information), and MBUILD (to build new information out of old information). Schank called these basic acts semantic primitives. When his program saw in a sentence words usually relating to the transfer of possession (such as give, buy, sell, donate, etc.), it would search for the normal props of ATRANS: the object being transferred, its receiver and original owner, the means of transfer, and so on. If the program didn't find these props, it would try another possible meaning of the verb. After successfully determining the meaning of the verb, the program would make inferences associated with the semantic primitive. For example, an ATRANS rule might be that if someone gets something they want, they may be happy about it and may use it. (Daniel Crevier, 1994)

Schank implemented his idea of conceptual dependency in a program called MARGIE (memory, analysis, response generation in English). MARGIE was a program that analyzed English sentences, turned them into semantic representations, and generated inferences from them. Take for example: "John went to a restaurant. He ordered a hamburger. It was cold when the waitress brought it. He left her a very small tip." MARGIE didn't work. Schank and his colleagues found that "any single sentence lends itself to so many plausible inferences that it was impossible to isolate those pertinent to the next sentence." For example, from "It was cold when the waitress brought it" MARGIE might say "The hamburger's temperature was between 75 and 90 degrees, The waitress brought the hamburger on a plate, She put the plate on a table, etc."
The inference that cold food makes people unhappy would be so far down the line that it wouldn't be looked at, and as a result MARGIE wouldn't have understood the story well enough to answer the question, "Why did John leave a small tip?" While MARGIE applied syntax and semantics well, it forgot about pragmatics. To solve this problem, Schank moved to Yale and teamed up with Professor of Psychology Robert Abelson. They realized that most of our everyday activities are linked together in chains, which they called "scripts." (Daniel Crevier, 1994)

In 1975, SAM (Script Applier Mechanism), written by Richard Cullingford, used an automobile accident script to make sense out of newspaper reports of such accidents. SAM built internal representations of the articles using semantic primitives. SAM was the first working natural language processing program. SAM successfully went from message to intended meaning because it successfully implemented the steps in between - syntax, semantics, and pragmatics. Despite the success of SAM, Schank said "real understanding requires the ability to establish connections between pieces of information for which no prescribed set of rules, or scripts, exist." (Daniel Crevier, 1994, page 167) So Robert Wilensky created PAM (Plan Applier Mechanism). PAM interpreted stories by linking sentences together through a character's goals and plans. Here is an example of PAM (Daniel Crevier, 1994): John wanted money. He got a gun and walked into a liquor store. He told the owner he wanted some money. The owner gave John the money and John left. In the process of understanding the story, PAM put itself in the shoes of the participants. From John's point of view: I needed to get some dough. So I got myself this gun, and I walked down to the liquor store. I told the shopkeeper that if he didn't let me have the money then I would shoot him. So he handed it over. Then I left. From the store owner's point of view: I was minding the store when a man entered.
He threatened me with a gun and demanded all the cash receipts. Well, I didn't want to get hurt so I gave him the money. Then he escaped.

A new idea from MIT is to grab bits and parts of speech and ask the user for more details, so that the program can understand what it didn't before and better understand what it did. (G. McWilliams, 1993) In IBM's current NLP programs, instead of having rules for determining context and meaning, the program determines its own rules from the relationships between words in its input. For example, the program could add a new definition to the word "bad" once it realized that it is slang for "incredible." IBM also uses statistical probability to determine the meaning of a word. IBM's NLP programs also use a sentence-charting technique. For example, charting the sentence "The boy has left" and storing "the boy" as a noun phrase allows the computer to see the subject of a following sentence beginning with "He" as "the boy." (G. McWilliams, 1993)

In the 1950s, Noam Chomsky believed that NLP consisted only of syntax. With MARGIE, Roger Schank added semantics. By 1975, Robert Wilensky's PAM could handle pragmatics, too. And as Joe Weizenbaum did with ELIZA in 1958, over 35 years later IBM is adding tricks to its NLP programs. Natural Language Processing has had many successes - and many failures. How well can a computer understand us?

f:\12000 essays\technology & computers (295)\Netware Salvage Utility.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

NetWare SALVAGE Utility

One of NetWare's most useful utilities is the Salvage utility, which is something of a trade secret. One day a user will delete a couple of files or a complete directory - accidentally, of course - and it will be the job of the LAN administrator to save the day, because the files were the company's financial statements and they were due in a meeting yesterday.
The NetWare 3.12 and 4.x SALVAGE utility is an extremely useful and sophisticated tool for recovering these files. NetWare retains deleted files in the volume where the files originally resided. There they continue to pile up until the deleted files completely saturate the volume. When the volume becomes full with these images of deleted files, the system begins purging, starting with the files that have been deleted for the longest period of time. The only exception to this is files or directories that have been tagged with the purge attribute. As you can imagine, these hidden deleted files can quickly eat up the space on a hard drive, and the administrator will need to keep an eye on them so that the system is not unduly slowed by purging to make room for saved and working files. These deleted files can also be purged manually with the SALVAGE utility, which is a great way to make sure that a file you don't want others to see is completely removed from the system!!!

For a user or administrator to retrieve a file using SALVAGE, the Create right (the right to edit and read a directory area or file) must be assigned to the directory in which the file resides. If the directory still exists, the files are put back into the directory from which they were deleted. If the file being salvaged has the same name as a file that already exists, a prompt will be presented to rename the file being salvaged. Since NetWare keeps track of the files by date and time, several versions of the file may accumulate. When a directory is deleted, the method for recovery is a bit different. NetWare does not keep track of the directories, only the files. These files are stored in a hidden directory called DELETED.SAV. This directory exists in every volume on a network. The supervisor must go to this directory, where the desired files can be copied to other directories to be completely recovered.
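The retention-and-purge behavior described above is essentially an oldest-first queue: deleted files pile up until the volume fills, then the longest-deleted ones are purged to make room. Here is a toy Python model of that policy, not NetWare code; the names, sizes, and capacity units are invented for illustration.

```python
from collections import deque

# A toy model of the deleted-file behavior described above: deleted
# files are retained (oldest first) until the volume fills, then the
# system purges the longest-deleted entries to make room.
class Volume:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.deleted = deque()          # (name, size), oldest at the left

    def delete(self, name, size):
        """Retain a deleted file, purging the oldest entries if needed."""
        while self.deleted and self.used + size > self.capacity:
            _, old_size = self.deleted.popleft()   # purge longest-deleted
            self.used -= old_size
        self.deleted.append((name, size))
        self.used += size

    def salvage(self, name):
        """Recover a retained file, if it has not been purged yet."""
        for i, (n, s) in enumerate(self.deleted):
            if n == name:
                del self.deleted[i]
                self.used -= s
                return True
        return False
```

The model also shows why salvage can fail: once the oldest entry has been pushed out to make room, there is nothing left to recover.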
Now that you have a simple explanation of the way the system works, let's look at the actual graphical user interface (GUI) that comes up when you type SALVAGE at the network DOS prompt. The main menu is below. As you can see, this simple menu is extremely user-friendly. Like all NetWare utilities, the only keys used are Delete, Insert, F5, Escape, and Enter. When you select the View/Recover Deleted Files option, a new menu appears prompting for the file string to locate. As in DOS, wild cards can be used, or you can type the file name. The GUI is presented on the following page. The default for the search string is "*", the match-all wild card, which will display all the files deleted in the chosen directory. An example of this listing is presented below, showing the files that were deleted in a particular directory. You can very simply undelete one of these files by highlighting the file (marking multiples with the F5 key) and pressing the Enter key. A message box then appears prompting you to verify the file(s) to be recovered. Selecting the YES command button recovers the file. It is as simple as that. If you need to change to a different directory, all you have to do is select the Select Current Directory option from the main menu. This brings up a current path display window and a network directory window in which to make changes to the path. As you look at the example below, you will see that all you need to do is highlight the Network Directories window option and press the Enter key until the path window displays the path you want. Once at the desired path, press the Escape key to go back to the main menu, select the View/Recover Deleted Files option, and proceed as before. Well, this is all there is to recovering a file from a network using NetWare.
It is also another great example of how things deleted from a network drive are still accessible, so if you want a very important company document to be gone for good, you will have to purge it from within SALVAGE or mark it with the purge attribute.

f:\12000 essays\technology & computers (295)\Neural Networks.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Neural Networks

A neural network, also known as an artificial neural network (ANN), provides a unique computing architecture whose potential has only begun to be tapped. Neural networks are used to address problems that are intractable or cumbersome with traditional methods. These new computing architectures are radically different from the computers that are widely used today. ANNs are massively parallel systems that rely on dense arrangements of interconnections and surprisingly simple processors (Cr95, Ga93). Artificial neural networks take their name from the networks of nerve cells in the brain. Although a great deal of biological detail is eliminated in these computing models, ANNs retain enough of the structure observed in the brain to provide insight into how biological neural processing may work (He90). Neural networks provide an effective approach for a broad spectrum of applications. They excel at problems involving patterns, including pattern mapping, pattern completion, and pattern classification (He95). Neural networks may be applied to translate images into keywords or even financial data into financial predictions (Wo96). Neural networks utilize a parallel processing structure that has large numbers of processors and many interconnections between them. These processors are much simpler than typical central processing units (He90). In a neural network, each processor is linked to many of its neighbors, so that there are many more interconnections than processors. The power of the neural network lies in the tremendous number of these interconnections (Za93).
ANNs are generating much interest among engineers and scientists. Artificial neural network models contribute to our understanding of biological models. They also provide a novel type of parallel processing that has powerful capabilities and potential for creative hardware implementations, meets the demand for fast computing hardware, and provides the potential for solving application problems (Wo96). Neural networks excite our imagination and relentless desire to understand the self, and in addition, equip us with an assemblage of unique technological tools. But what has triggered the most interest in neural networks is that models similar to biological nervous systems can actually be made to do useful computations, and furthermore, the capabilities of the resulting systems provide an effective approach to previously unsolved problems (Da90).

Neural network architectures are strikingly different from traditional single-processor computers. Traditional Von Neumann machines have a single CPU that performs all of its computations in sequence (He90). A typical CPU is capable of a hundred or more basic commands, including additions, subtractions, loads, and shifts. The commands are executed one at a time, at successive steps of a time clock. In contrast, a neural network processing unit may do only one, or, at most, a few calculations. A summation function is performed on its inputs, and incremental changes are made to parameters associated with the interconnections. This simple structure nevertheless provides a neural network with the capabilities to classify and recognize patterns, to perform pattern mapping, and to be useful as a computing tool (Vo94). The processing power of a neural network is measured mainly by the number of interconnection updates per second. In contrast, Von Neumann machines are benchmarked by the number of instructions that are performed per second, in sequence, by a single processor (He90).
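The simple processing unit described above, a summation over weighted inputs followed by a squashing function, can be written in a few lines. This is a generic sketch of a single artificial neuron with a sigmoid activation, not the unit of any particular published paradigm.

```python
import math

# One processing unit as described above: form a weighted sum of the
# inputs plus a bias, then squash it with a sigmoid activation. This is
# a generic sketch, not the unit of any specific paradigm.
def unit_output(inputs, weights, bias=0.0):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # output between 0 and 1
```

A network is built by wiring thousands of such units together; learning then consists of incrementally adjusting the weights on the interconnections, which is why interconnection updates per second is the natural benchmark.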
Neural networks, during their learning phase, adjust parameters associated with the interconnections between neurons. Thus, the rate of learning is dependent on the rate of interconnection updates (Kh90). Neural network architectures depart from typical parallel processing architectures in some basic respects. First, the processors in a neural network are massively interconnected. As a result, there are more interconnections than there are processing units (Vo94). In fact, the number of interconnections usually far exceeds the number of processing units. State-of-the-art parallel processing architectures typically have a smaller ratio of interconnections to processing units (Za93). In addition, parallel processing architectures tend to incorporate processing units that are comparable in complexity to those of Von Neumann machines (He90). Neural network architectures depart from this organization scheme by containing simpler processing units, which are designed for summation of many inputs and adjustment of interconnection parameters.

The two primary attractions of neural networks, from the computational viewpoint, are learning and knowledge representation. Many researchers believe that machine learning techniques offer the best hope for eventually being able to perform difficult artificial intelligence tasks (Ga93). Most neural networks learn from examples, just as children learn to recognize dogs from examples of dogs (Wo96). Typically, a neural network is presented with a training set consisting of a group of examples from which the network can learn. These examples, known as training patterns, are represented as vectors, and can be taken from such sources as images, speech signals, sensor data, and diagnosis information (Cr95, Ga93). The most common training scenarios utilize supervised learning, during which the network is presented with an input pattern together with the target output for that pattern.
The target output usually constitutes the correct answer, or correct classification, for the input pattern. In response to these paired examples, the neural network adjusts the values of its internal weights (Cr95). If training is successful, the internal parameters are adjusted to the point where the network can produce the correct answer in response to each input pattern (Za93). Because they learn by example, neural networks have the potential for building computing systems that do not need to be programmed (Wo96). This reflects a radically different approach to computing compared to traditional methods, which involve the development of computer programs. In a computer program, every step that the computer executes is specified in advance by the programmer. In contrast, a neural net begins with sample inputs and outputs, and learns to provide the correct output for each input (Za93). The neural network approach does not require human identification of features. It also doesn't require human development of algorithms or programs that are specific to the classification problem at hand. All of this suggests that time and human effort can be saved (Wo96). There are drawbacks to the neural network approach, however. The time required to train the network may not be known, and the process of designing a network that successfully solves an application problem may be involved. The potential of the approach, however, appears significantly better than that of past approaches (Ga93).

Neural network architectures encode information in a distributed fashion. Typically, the information that is stored in a neural network is shared by many of its processing units. This type of coding is in stark contrast to traditional memory schemes, where particular pieces of information are stored in particular locations of memory. Traditional speech recognition systems, for example, contain a lookup table of template speech patterns that are compared one by one to spoken inputs.
Such templates are stored in a specific location of the computer memory. Neural networks, in contrast, identify spoken syllables by using a number of processing units simultaneously. The internal representation is thus distributed across all or part of the network. Furthermore, more than one syllable or pattern may be stored at the same time by the same network (Za93).

Neural networks have far-reaching potential as building blocks in tomorrow's computational world. Already, useful applications have been designed, built, and commercialized, and much research continues in hopes of extending this success (He95). Neural network applications emphasize areas where they appear to offer a more appropriate approach than traditional computing does. Neural networks offer possibilities for solving problems that require pattern recognition, pattern mapping, dealing with noisy data, pattern completion, associative lookups, and systems that learn or adapt during use (Fr93, Za93). Examples of specific areas where these types of problems appear include speech synthesis and recognition, image processing and analysis, sonar and seismic signal classification, and adaptive control. In addition, neural networks can perform some knowledge processing tasks and can be used to implement associative memory (Kh90). Some optimization tasks can also be addressed with neural networks. The range of potential applications is impressive.

The first highly developed application was handwritten character identification. A neural network is trained on a set of handwritten characters, such as printed letters of the alphabet. The network's training set then consists of the handwritten characters as inputs together with the correct identification for each character. At the completion of training, the network identifies handwritten characters in spite of the variations (Za93).
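The training loop just described, input patterns paired with target outputs and incremental weight adjustments, can be illustrated with a perceptron, one of the simplest supervised paradigms. The 3x3 "characters" below are invented toy patterns (a vertical bar versus a horizontal bar), not a real handwriting data set.

```python
# A minimal supervised-learning loop in the spirit described above: the
# network sees input patterns paired with target outputs and nudges its
# weights toward the correct answers (classic perceptron rule).
def train_perceptron(examples, epochs=20, rate=0.5):
    n = len(examples[0][0])
    w = [0.0] * n                       # one weight per input "pixel"
    b = 0.0
    for _ in range(epochs):
        for x, target in examples:
            out = 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0
            err = target - out          # zero when the answer is right
            w = [wi + rate * err * xi for wi, xi in zip(w, x)]
            b += rate * err
    return w, b

# Invented 3x3 pixel grids: a vertical bar (class 1) vs. a horizontal
# bar (class 0), standing in for handwritten characters.
EXAMPLES = [
    ([0, 1, 0, 0, 1, 0, 0, 1, 0], 1),
    ([0, 0, 0, 1, 1, 1, 0, 0, 0], 0),
]
```

After training, the learned weights separate the two patterns; a real character recognizer differs mainly in scale, with many more units, layers, and training examples.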
Another impressive application study involved NETtalk, a neural network that learns to produce phonetic strings, which in turn specify pronunciation for written text. The input to the network in this case was English text in the form of successive letters as they appear in sentences. The output of the network was phonetic notation for the proper sound to produce given the text input. The output was linked to a speech generator so that an observer could hear the network learn to speak. This network, trained by Sejnowski and Rosenberg, learned to pronounce English text with a high level of accuracy (Za93). Neural network studies have also been done for adaptive control applications. A classic implementation of a neural network control system was the broom-balancing experiment, originally done by Widrow and Smith in 1963. The network learned to move a cart back and forth in such a way that a broom hinged to the cart stayed balanced upside-down on its handle tip (Da90). More recently, application studies were done on teaching a robotic arm how to reach its target position, and on steadying a robotic arm. Research was also done on teaching a neural network to control an autonomous vehicle using simulated, simplified vehicle control situations (Wo96).

Neural networks are expected to complement rather than replace other technologies. Tasks that are done well by traditional computer methods need not be addressed with neural networks, but the technologies that neural networks can complement are far-reaching (He90). For example, expert systems and rule-based knowledge-processing techniques are adequate for some applications, although neural networks have the ability to learn rules more flexibly. More sophisticated systems may be built in some cases from a combination of expert systems and neural networks (Wo96). Sensors for visual or acoustic data may be combined in a system that includes a neural network for analysis and pattern recognition.
Robotics and control systems may use neural network components in the future. Simulation techniques, such as simulation languages, may be extended to include structures that allow us to simulate neural networks. Neural networks may also play a new role in the optimization of engineering designs and industrial resources (Za93).

Many design choices are involved in developing a neural network application. The first is choosing the general area of application. Usually this is an existing problem that appears amenable to solution with a neural network. Next, the problem must be defined specifically enough that a selection of inputs and outputs to the network may be made. Choices for inputs and outputs involve identifying the types of patterns to go into and out of the network. In addition, the researcher must decide how those patterns are to represent the needed information. Next, internal design choices must be made. These include the topology and size of the network (Kh90). The number of processing units is specified, along with the specific interconnections that the network is to have. Processing units are usually organized into distinct layers, which are either fully or partially interconnected (Vo94). There are additional choices for the dynamic activity of the processing units. A variety of neural net paradigms are available, and each paradigm dictates how the readjustment of parameters takes place. This readjustment results in learning by the network. Next, there are internal parameters that must be tuned to optimize the ANN design (Kh90). One such parameter is the learning rate from the back-error propagation paradigm. The value of this parameter influences the rate of learning by the network, and may also influence how successfully the network learns (Cr95). There are experiments indicating that learning occurs more successfully if this parameter is decreased during a learning session.
Some paradigms utilize more than one parameter that must be tuned. Typically, network parameters are tuned with the help of experimental results and experience on the specific application problem under study (Kh90). Finally, the selection of training data presented to the neural network influences whether or not the network learns a particular task. As with a child, how well a network learns depends on the examples presented. A good set of examples, one that illustrates the tasks to be learned well, is necessary for the desired learning to take place. The set of training examples must also reflect the variability in the patterns that the network will encounter after training (Wo96).

Although a variety of neural network paradigms have already been established, many variations are currently being researched. Typically, these variations add more complexity to gain more capabilities (Kh90). Examples of additional structures under investigation include the incorporation of delay components, the use of sparse interconnections, and the inclusion of interaction between different interconnections. More than one neural net may be combined, with the outputs of some networks becoming the inputs of others. Such combined systems sometimes provide improved performance and faster training times (Da90).

Implementations of neural networks come in many forms. The most widely used implementations of neural networks today are software simulators. These are computer programs that simulate the operation of the neural network. The speed of the simulation depends on the speed of the hardware upon which the simulation is executed. A variety of accelerator boards are available for individual computers to speed the computations (Wo96). Simulation is key to the development and deployment of neural network technology. With a simulator, one can establish most of the design choices in a neural network system.
The choice of inputs and outputs can be tested, as well as the capabilities of the particular paradigm used (Wo96). Implementations of neural networks are not limited to computer simulation, however. An implementation could be an individual calculating the changing parameters of the network using pencil and paper. Another implementation would be a collection of people, each one acting as a processing unit, using hand-held calculators (He90). Although these implementations are not fast enough to be effective for applications, they are nevertheless methods for emulating a parallel computing structure based on neural network architectures (Za93). One challenge to neural network applications is that they require more computational power than readily available computers provide, and the tradeoffs in scaling up such a network are sometimes not apparent from a small-scale simulation. The performance of a neural network must be tested using a network the same size as that to be used in the application (Za93). The response of an ANN may be accelerated through the use of specialized hardware. Such hardware may be designed using analog computing technology or a combination of analog and digital. Development of such specialized hardware is underway, but there are many problems yet to be solved. Technological advances such as custom logic chips and logic-enhanced memory chips are being considered for neural network implementations (Wo96).

No discussion of implementation would be complete without mention of the original neural networks: biological nervous systems. These systems provided the first implementation of neural network architectures. Both kinds of systems are based on parallel computing units that are heavily interconnected, and both include feature detectors, redundancy, massive parallelism, and modulation of connections (Vo94, Gr93). However, the differences between biological systems and artificial neural networks are substantial.
Artificial neural networks usually have regular interconnection topologies, based on a fully connected, layered organization. While biological interconnections do not precisely fit the fully connected, layered model, they nevertheless have a defined structure at the systems level, including specific areas that aggregate synapses and fibers, and a variety of other interconnections (Lo94, Gr93). Although many connections in the brain may seem random or statistical, it is likely that considerable precision exists at the cellular and ensemble levels as well as at the system level. Another difference between artificial and biological systems arises from the fact that the brain organizes itself dynamically during a developmental period and can permanently fix its wiring based on experiences during certain critical periods of development. This influence on connection topology does not occur in current ANNs (Lo94, Da90). The future of neurocomputing can benefit greatly from biological studies. Structures found in biological systems can inspire new design architectures for ANN models (He90). Similarly, biology and cognitive science can benefit from the development of neurocomputing models. Artificial neural networks do, for example, illustrate ways of modeling characteristics that appear in the human brain (Le91). Conclusions, however, must be drawn carefully to avoid confusion between the two types of systems.

REFERENCES

[Cr95] Cross, et al., "Introduction to Neural Networks", Lancet, Vol. 346 (October 21, 1995), p. 1075.
[Da90] Dayhoff, J. E., Neural Networks: An Introduction, Van Nostrand Reinhold, New York, 1990.
[Fr93] Franklin, Hardy, "Neural Networking", Economist, Vol. 329 (October 9, 1993), p. 19.
[Ga93] Gallant, S. I., Neural Network Learning and Expert Systems, MIT Press, Massachusetts, 1993.
[Gr93] Gardner, D., The Neurobiology of Neural Networks, MIT Press, Massachusetts, 1993.
[He90] Hecht-Nielsen, R., Neurocomputing, Addison-Wesley Publishing Company, Massachusetts, 1990.
[He95] Helliar, Christine, "Neural Computing", Management Accounting, Vol. 73 (April 1, 1995), p. 30.
[Kh90] Khanna, T., Foundations of Neural Networks, Addison-Wesley Publishing Company, Massachusetts, 1990.
[Le91] Levine, D. S., Introduction to Neural & Cognitive Modeling, Lawrence Erlbaum Associates Publishers, New Jersey, 1991.
[Lo94] Loofbourrow, Tod, "When Computers Imitate the Workings of the Brain", Boston Business Journal, Vol. 14 (June 10, 1994), p. 24.
[Vo94] Vogel, William, "Minimally Connective, Auto-Associative, Neural Networks", Connection Science, Vol. 6 (January 1, 1994), p. 461.
[Wo96] Internet information:
http://www.mindspring.com/~zsol/nnintro.html
http://ourworld.compuserve.com/homepages/ITechnologies/
http://sharp.bu.edu/inns/nn.html
http://www.eeb.ele.tue.nl/neural/contents/neural_networks.html
http://www.ai.univie.ac.at/oefai/nn/
http://www.nd.com/welcome/whatisnn.htm
http://www.mindspring.com/~edge/neural.html
http://vita.mines.colorado.edu:3857/lpratt/applied-nnets.html
[Za93] Zahedi, F., Intelligent Systems for Business: Expert Systems with Neural Networks, Wadsworth Publishing Company, California, 1993.

f:\12000 essays\technology & computers (295)\None.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Organic Molecules Challenge Silicon's Reign as King of Semiconductors

There is a revolution brewing in the semiconductor industry. It may take 30 years or more to reach perfection, but when it does, the advance may be so great that today's computers will be little more than calculators compared to what will come after. The revolution is called molecular electronics, and its goal is to depose silicon as king of the computer chip and put carbon in its place. The perpetrators are a few clever chemists trying to use pigments, proteins, polymers, and other organic molecules to carry out the same tasks that microscopic patterns of silicon and metal do now.
For years these researchers worked in secret, mainly at their blackboards, plotting and planning. Now they are beginning to conduct small forays in the laboratory, and their few successes to date lead them to believe they are on the right track. "We have a long way to go before carbon-based electronics replace silicon-based electronics, but we can see now how we hope to revolutionize computer design and performance," said Robert R. Birge, a professor of chemistry at Carnegie-Mellon University, Pittsburgh. "Now it's only a matter of time, hard work, and some luck before molecular electronics starts having a noticeable impact." Molecular electronics is so named because it uses molecules to act as the "wires" and "switches" of computer chips. Wires may someday be replaced by polymers that conduct electricity, such as polyacetylene and polyphenylene sulfide. Other candidates are organometallic compounds such as porphyrins and phthalocyanines, which also conduct electricity. When crystallized, these flat molecules stack like pancakes, and the metal ions in their centers line up with one another to form a one-dimensional wire. Many organic molecules can exist in two distinct stable states that differ in some measurable property and are interconvertible. These could be the switches of molecular electronics. For example, bacteriorhodopsin, a bacterial pigment, exists in two optical states: one state absorbs green light, the other orange. Shining green light on the green-absorbing state converts it into the orange state, and vice versa. Birge and his coworkers have developed high-density memory drives using bacteriorhodopsin. Although the idea of using organic molecules may seem far-fetched, it happens every day throughout nature. "Electron transport in photosynthesis, one of the most important energy-generating systems in nature, is a real-world example of what we're trying to do," said Phil Seiden, manager of molecular science, IBM, Yorktown Heights, N.Y.
Birge, who heads the Center for Molecular Electronics at Carnegie-Mellon, said two factors are driving this developing revolution: more speed and less space. "Semiconductor chip designers are always trying to cram more electronic components into a smaller space, mostly to make computers faster," he said. "And they've been quite good at it so far, but they are going to run into trouble quite soon." Engineers at IBM, for example, made history when they built a memory chip with enough transistors to store a million bytes of information, the megabyte. It came as no big surprise. Nor did it when they came out with a 16-megabyte chip. Chip designers have been cramming more transistors into less space since Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor first showed how to put multitudes of electronic components on a slab of silicon. But 16 megabytes may be near the end of the road. As bits get smaller and closer together, "crosstalk" between them tends to degrade their performance. If the components were pushed any closer they would short-circuit. Physical limits have triumphed over engineering. That is when chemistry will have its day. Carbon, the element common to all forms of life, will become the element of computers too. "That is when we will see electronics based on inorganic semiconductors, namely silicon and gallium arsenide, giving way to electronics based on organic compounds," said Scott E. Rickert, associate professor of macromolecular science at Case Western Reserve University, Cleveland, and head of the school's Polymer Microdevice Laboratory. "As a result," added Rickert, "we could see memory chips that store billions of bytes of information and computers that are thousands of times faster. The science of molecular electronics could revolutionize computer design." But even if it does not, the research will surely have a major impact on organic chemistry.
"Molecular electronics presents very challenging intellectual problems in organic chemistry, and when people work on challenging problems they often come up with remarkable, interesting solutions," said Jonathan S. Lindsey, assistant professor of chemistry, Carnegie-Mellon University. "Even if the whole field falls through, we'll still have learned a remarkable amount more about organic compounds and their physical interactions than we know now. That's why I don't have any qualms about pursuing this research." Moreover, many believe that industries will benefit regardless of whether an organic-based computer chip is ever built. For example, Lindsey is developing an automated system, as well as the chemistry to go along with it, for synthesizing complex organic compounds, analogous to the systems now available for peptide and nucleotide synthesis. And Rickert is using technology he developed for molecular electronic applications to make gas sensors that are both a thousand times faster and more sensitive than conventional sensors. For now, the molecular electronics revolution is in the formative stage, and most of the investigations are still more basic than applied. One problem with which researchers are beginning to come to grips, though, is determining the kinds of molecules needed to make the transistors and other electronic components that will go into molecular electronic devices. Some of the molecules are like bacteriorhodopsin in that their two states flip back and forth when exposed to certain wavelengths of light. These molecules would be the equivalent of an optical switch, in which one state is on and the other state is off. Optical switches have been difficult to make from standard semiconductors. Bacteriorhodopsin is the light-harvesting pigment of purple bacteria living in salt marshes outside San Francisco. The compound consists of a pigment core surrounded by a protein that stabilizes the pigment.
Birge has capitalized on the clear-cut distinction between the two states of bacteriorhodopsin to make readable-writable optical memory devices. Laser disks are read-only optical memory devices; once encoded, the data cannot be changed. Birge has been able to form a thin film of bacteriorhodopsin on quartz plates that can then be used as optical memory disks. The film consists of a thousand one-molecule-thick layers deposited one layer at a time using the Langmuir-Blodgett technique. A quartz plate is dipped into water whose surface is covered with bacteriorhodopsin. When the plate is withdrawn at a certain speed, a monolayer of bacteriorhodopsin adheres to the plate with all the molecules oriented in the same direction. Repeating this process deposits a second layer, then a third, and so on. Information is stored by assigning 0 to the green state and 1 to the orange state. Miniature lasers of the type used in fiber-optic communications devices are used to switch between the two states. Irradiating the disk with a green laser converts the green state to the orange state, storing a 1. Resetting the bit is accomplished by irradiating the same small area of the disk with a red laser. Data stored on the disk are read by using both lasers: the disk is scanned with the red laser, and any bit with a value of 1 is then reset using the green laser. This is analogous to the way in which both magnetic and electrical memories are read today, but with one important difference: "Because the two states take only five picoseconds (five trillionths of a second) to flip back and forth, information storage and retrieval are much faster than anything you could ever do magnetically or electrically," explained Birge. In theory, each pigment molecule could store one bit of information. In practice, however, approximately 100,000 molecules are used. The laser beam has a diameter of approximately 10 molecules and penetrates through the 1,000-molecule-thick layer.
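The write-and-read protocol just described can be caricatured in a few lines of code. This is only a toy model of the scheme as the article describes it (the state names, the eight-bit "disk", and the function names are hypothetical; the real device works on a molecular film, not a Python list):

```python
# Toy model of the bacteriorhodopsin optical disk described above.
# Green state = 0, orange state = 1; a green laser writes a 1,
# a red laser resets a spot back to the green (0) state.
GREEN, ORANGE = 0, 1

disk = [GREEN] * 8  # eight spots, all initially storing 0

def write_bit(disk, i, value):
    # Green laser converts green -> orange (stores a 1); red laser resets.
    disk[i] = ORANGE if value else GREEN

def read_bit(disk, i):
    # Scanning with the red laser resets the spot; a state change
    # reveals that the bit was a 1.
    was_one = (disk[i] == ORANGE)
    disk[i] = GREEN              # the read is destructive...
    if was_one:
        write_bit(disk, i, 1)    # ...so any 1 must be rewritten with green
    return int(was_one)

data = [1, 0, 1, 1, 0, 0, 0, 1]
for i, bit in enumerate(data):
    write_bit(disk, i, bit)
assert [read_bit(disk, i) for i in range(8)] == data
assert [read_bit(disk, i) for i in range(8)] == data  # data survives the read
```

The reset-then-rewrite step is what makes the destructive red-laser scan behave like the non-destructive reads of conventional magnetic memory.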
Although this reduces the amount of information that can be stored on each disk, it does provide fidelity through redundancy. "We can have half the molecules or more in a disk fall apart and there would still be enough excited by the laser at each spot to provide accurate data storage," said Birge. And even using 100,000 molecules per data bit, an old 5.25-inch floppy disk could store well over 500 megabytes of data. One drawback to this system is that bacteriorhodopsin's two states are only stable at liquid-nitrogen temperatures, about -196°C. But Birge does not see this as anything more than a short-term problem. "We're now using genetic engineering to modify the protein part of the molecule so that it will stabilize the two states at room temperature," he said. "Based on outstanding work, we don't think this will be a problem." Faster, higher-density disk storage is a laudable goal, but the big stakes are in improving on semiconductor components. Birge, for example, is developing a random-access memory chip using the bacteriorhodopsin system. Instead of having millions of transistors wired together on a slab of silicon, there would be millions of tiny lasers pointed at a film of bacteriorhodopsin. "These RAM chips would actually be a little bigger than what we have," he said, "but they would still be 1,000 times faster because the molecular components work so much faster than ones made of semiconductor materials." Recently, Theodore O. Poehler, director of research at the Johns Hopkins Applied Physics Laboratory, Laurel, Md., and Richard S. Potember, a senior chemist there, built a working four-byte RAM chip using a molecular charge-transfer system. Four bytes may seem crude compared to the million-byte chip built by IBM, but the first semiconductor chip, built by Texas Instruments' Kilby in 1959, was also crude compared to today's chips. Poehler and Potember's system also uses laser light to activate the molecular switches, but the chemistry is much different from Birge's.
In the Carnegie-Mellon system, light causes an electron on the bacteriorhodopsin to move into a higher energy level within the same molecule. This changes its absorption spectrum. In the Hopkins system, light causes an electron to transfer between two different molecules, one called an electron donor, the other an electron acceptor. This is known as a charge-transfer reaction, and researchers in several laboratories are designing devices using this type of molecular switch. In their system, Poehler and Potember use compounds formed from either copper or silver (the electron donor) and tetracyanoquinodimethane (TCNQ) or various derivatives (the electron acceptor). The researchers first deposit the metal onto a substrate, which could be either a silicon or a plastic slab. Next, they deposit a solution of the organic electron acceptor onto the metal and heat it gently, causing a reaction to occur and evaporating the solvent. In the equilibrium state between these two molecular components, an electron is transferred from the copper to the TCNQ, forming a positive metal ion and a negative TCNQ ion. Irradiating this complex with light from an argon laser causes the reverse reaction to occur, forming neutral metal and neutral TCNQ. Two measurable changes accompany this reaction. One is that the laser-lit area changes color: from blue to a pale yellow if the metal is copper, or from violet if it is silver. This change is easily detected using the same or another laser. Thus metal-TCNQ films, like those made from bacteriorhodopsin, could serve as optical memory storage devices. Poehler said that they have already built several such devices and are now testing their performance. They work at room temperature. The other change that occurs, however, is more like those that take place in standard microelectronic switches.
When an electric field is applied to the organometallic film, it becomes conducting in the irradiated area, just as a semiconductor does when an electric field is applied to it. Erasing data, or closing the switch, is accomplished using any low-intensity laser, including carbon dioxide, neodymium yttrium aluminum garnet, or gallium arsenide devices. The tiny amount of heat generated by the laser beam causes the metal and TCNQ to return to their equilibrium, non-conducting state. Turning off the applied voltage also returns the system to its non-conducting state. The Hopkins researchers found they could tailor the on/off behavior of this system by changing the electron acceptor. Using relatively weak electron acceptors, such as dimethoxy-TCNQ, produced organometallic films with a very sharp on/off behavior. But if a strong electron acceptor such as tetrafluoro-TCNQ is used, the film remains conductive even when the applied field is removed. This effect can last from several minutes to several days; the stronger the electron acceptor, the longer the memory effect. Poehler and his colleagues are now working to optimize the electrical and optical behavior of these materials. They have found, for example, that films made with copper last longer than those made with silver. In addition, they are testing various substrates and coatings to further stabilize these systems. "We know the system works," Poehler said. "Now we're trying to develop it into a system that will work in microelectronics applications." At Case Western, Rickert is also trying to take good organic chemistry and turn it into something workable in microelectronics. He and his coworkers have found that, using Langmuir-Blodgett techniques, they can make polymer films that actually look and behave like metal foils. "The polymer molecules are arranged in a very regular, ordered array, as if they were crystalline," said Rickert.
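The on/off behavior described for the metal-TCNQ films above can be sketched as a small state machine. The class and its transition rules are a hypothetical reading of that description, not a physical model:

```python
# Hypothetical state machine for the metal-TCNQ switch described above:
# irradiation under an applied field turns the film conducting; removing
# the field turns a weak-acceptor film off, while a strong acceptor
# (e.g. tetrafluoro-TCNQ) shows a memory effect and stays conductive.
class TcnqFilm:
    def __init__(self, strong_acceptor=False):
        self.strong_acceptor = strong_acceptor
        self.field_applied = False
        self.conducting = False

    def apply_field(self):
        self.field_applied = True

    def irradiate(self):
        # Argon-laser light drives the complex to its neutral form,
        # which conducts while a field is applied.
        if self.field_applied:
            self.conducting = True

    def laser_heat(self):
        # Gentle heating from any low-intensity laser restores the
        # equilibrium, non-conducting charge-transfer state.
        self.conducting = False

    def remove_field(self):
        self.field_applied = False
        # Weak acceptors (e.g. dimethoxy-TCNQ) switch off sharply;
        # strong acceptors retain conduction (the memory effect).
        if not self.strong_acceptor:
            self.conducting = False

weak = TcnqFilm(strong_acceptor=False)
weak.apply_field(); weak.irradiate(); weak.remove_field()
strong = TcnqFilm(strong_acceptor=True)
strong.apply_field(); strong.irradiate(); strong.remove_field()
print(weak.conducting, strong.conducting)  # False True
```

In this reading, acceptor strength plays the role of a retention parameter: it decides whether the switch is volatile or holds its state after power is removed.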
These foils, made from polymers such as polyvinyl stearate, behave much as metal oxide films do in standard semiconductor devices. But transistors made with the organic foils are 20 percent faster than their inorganic counterparts, and require much less energy to make and process. Early in 1986, Rickert made a discovery about these films that could have a major impact on the chemical industry long before any other aspect of molecular electronics does. "The electrical behavior of these foils is very sensitive to environmental changes such as temperature, pressure, humidity, and chemical composition," he said. "As a result, they make very good chemical sensors, better than any sensor yet developed." He has been able to develop an integrated sensor that to date can measure parts-per-billion concentrations of nitrogen oxides, carbon dioxide, oxygen, and ammonia. Moreover, it can measure all four simultaneously. Response times for the new "supersniffer," as Rickert calls the sensor, are in the millisecond range, compared to tens of seconds for standard gas sensors. Recovery times are faster too: under five seconds, compared to minutes or hours. The Case Western team is now using polymer foils as electrochemical and biochemical detectors. In spite of such successes, molecular electronics researchers point out that molecular electronic devices (MEDs) will never totally replace those made of silicon and other inorganic semiconductors. "Molecular electronics will never make silicon technology obsolete," said Carnegie-Mellon's Birge. "The lasers we will need, for example, will probably be built from gallium arsenide crystals on silicon wafers. But molecular electronic devices will replace many of those now made with silicon, and the combination of the two technologies should revolutionize computer design and function."
f:\12000 essays\technology & computers (295)\Nonverbal comm.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

CHAPTER 1: Rationale and Literature Review

Magnafix says, "Have you figured out the secret entrance to Kahn Draxen's castle?"
Newtrik sighs deeply.
Newtrik says, "I think so, but I haven't found the stone key yet!"
Magnafix grins mischievously.
Magnafix gives a stone key to Newtrik.
Newtrik smiles happily.
Newtrik shakes hands with Magnafix.
Newtrik says, "Thanks!"
Magnafix grins broadly and says, "No problem..."
Newtrik leaves west.

Introduction

Purpose

The purpose of this thesis is to investigate the communicative phenomena to be found in those environments known as Internet MUDs, or Multi-User Dimensions. These text-based virtual realities are presently available to students and faculty at most learning institutions, as well as to anyone with a computer and a modem. Though the term "virtual reality" has become connected for many with visions of fancy headgear and million-dollar gloves, MUDs require no such hardware. They are, however, a form of virtual reality, "because they construct enduring places, objects, and user identities. These objects have characteristics that define and constrain how users can interact with them" (Holmes & Dishman, 1994, p. 6). MUDs were created in their most rudimentary form nearly two decades ago; the technology that supports MUD interaction is thus well developed and has spawned a new variety of communicative environment, one that thousands if not millions of users have found fiercely compelling. Since MUDs are generally restricted to text-based interaction (some support ANSI codes, and graphical MUDs are gaining popularity), one might expect that the interactions therein are characterized by a lack of regulating feedback, dramaturgical weakness, few status cues, and social anonymity, as Kiesler and her colleagues have suggested (Kiesler, Siegel, & McGuire, 1984).
While these characteristics may be readily attributable to the majority of interactions within experiments on computer conferencing and electronic mail, such is not the case for MUDs, as each (there are hundreds) is a rich culture unto itself, as will be shown. This thesis is meant to explore the modalities by which MUD users avoid the drawbacks mentioned above, specifically, how nonverbal communication takes place in a virtual world composed solely of words.

Background

History of network computing

The first computer network was created in the late 1960s in an effort by the Department of Defense to link multiple command sites to one another, thus ensuring that central command could be carried on remotely if one or several sites were disabled or destroyed. Once the hardware was installed, the military allowed educational institutions to take advantage of the research resources inherent in multiple-site networking. This interlaced network of computer connections spread quickly, and in the early 1980s the network was divided into MILNET, for strictly military uses, and ARPANET, which, with the advent of satellite communications and global networking, became the Internet (Reid, 1993). On a smaller scale, throughout the 1970s, various corporations developed their own computer networks for intra-organizational interaction. E-mail and computer conferencing were created, useful for information exchange, but asynchronous (i.e., messages are stored for later retrieval by other users, rather than being synchronously co-authored) and thus less interpersonal than MUDs would later become. At the same time as this conferencing research was being done, another group of programmers was involved in the creation of text-based adventure games in which a user would wander through a textually depicted maze, occasionally encountering programmed foes with whom to do battle.
These first single-user adventure games, developed in the early 1970s, expanded the world's notion of computers from mere super-cooled, punch-card-munching behemoths to a more user-friendly conception of computers as toys and even friends. Inevitably, the networking technology and the game technology crossed paths. In 1979, Richard Bartle and Roy Trubshaw developed the first MUD (called "MUD", for Multi-User Dungeon; now, the term MUD is commonly accepted as a generic term for Multi-User Dimensions of many varieties) at Essex University. This original game became enormously popular with the students at Essex, to whom its use was restricted at first. As various technological barriers were toppled, access to "MUD" was granted to a widening circle of users in the United Kingdom, which eventually prompted two results. First, several of the "MUD" players wrote their own variations of the game. Second, the computer games magazines took note and produced a flurry of articles about "MUD" in the early 1980s (Reid, 1993; Bartle, 1990). These two results are related in that they brought about an exponential growth in the Multi-User Dimension community. By 1989, there were quite a few families of MUD programming technology, each designed with different goals in mind. Many of these technologies sought to distinguish themselves from their brethren by adopting new acronyms (as well as new programming approaches), such as MUSH (Multi-User Shared Hallucination), MUSE (Multi-User Simulated Environment), MOO (MUD, Object-Oriented), DUM (Depend Upon Mud (forever)), MAGE (Multi-Actor Gaming Environment), and MUCK (Multi-User C Kernel). At the time of this writing, there are an estimated five hundred publicly accessible MUDs (Turkle, 1995, p. 11). There also exist an unknown number of private MUDs, as well as commercial "pay-for-play" MUDs.
These numbers change from week to week, as MUDs quite frequently die out for various reasons (e.g., a MUD running on a university computer may suddenly lose the right to do so -- especially if the university was not informed of such use). Indeed, "large MUDs can be opened from scratch by spending a few hours with FTP" (Koster, 1996), and hence can expire shortly thereafter due to lack of interest. However, many MUDs survive for years, as evidenced by such hugely popular MUDs as Ancient Anguish, DragonMUD, and LambdaMOO, each of which boasts over seven thousand participants. It must be noted, however, that even though the rate at which people come on and stay on the Net is increasing, and shows no signs of slowing (Sellers, 1996), MUDs have remained one of the least-frequented portions of the Internet. Despite articles published in such mainstream publications as Time (September 13, 1993), The Atlantic (September 1993), The Wall Street Journal (September 15, 1995), MacUser (November 1995), Technology Review (July 1994), and The Village Voice (December 21, 1993), even the most cyber-savvy of citizens has likely not experienced a MUD. There are several reasons for this. First of all, MUDs have been rather insular, almost underground, in their marketing; there is a single USENET newsgroup dedicated to the announcement of new MUDs (rec.games.mud.announce). For the uninitiated, this sole advertising space is quite obscure, if not invisible. As such, it is common for people to be introduced to MUDs simply by word of mouth, a diffusion method that has met with limited success. Among people who have heard of MUDs, many assume that they are simply wastes of time (indeed, MUDs can devour time like few other activities). Another factor for new users is the fact that the graphical interface is now the Internet industry standard; if there is not a multi-colored icon to click on, many recent Internet users will pass it by.
As such, it may turn out that the graphical MUDs currently under development will become the dominant paradigm for real-time chat and adventure games in the years to come. Finally, there is a steep learning curve involved in becoming acquainted with one's first MUD, including such hurdles as Unix, telnet, the initial login screen, the hundreds of available MUD commands, the local MUD culture, etc.

Previous studies of text-based virtual realities:

The current body of communication research on MUDs is scarce, though growing steadily. Carlstrom's (1992) sociolinguistic study examines the popular MUD LambdaMOO and points out several notable differences between MUD communication and real-life communication, including issues of proxemics, turn-taking, and the uses of silence. Lynn Cherney at Stanford University has produced a wealth of important linguistic studies, such as her (1994) analysis of gender-based language differences as evidenced on one MUD, and a (1995a) study of the objectification of users' virtual bodies on MUDs. Another article (Cherney, 1995b) points out the details involved in MUD communication backchannels, implicitly satisfying Kiesler's query: "Consider the consequences if one cannot look quizzically to indicate if the message is confusing or ... nod one's head or murmur 'hmm' to indicate that one understands the other person" (Kiesler, Zubrow, & Moses, 1985, p. 82). Finally, Cherney's (1995b) effort examines the modal complexity of speech events on one MUD, and suggests a possible classification system for MUD nonverbal communication, including conventional actions, backchannels, byplay, narration, and exposition. Michael Holmes is another scholar who has recently contributed to the literature on MUDs.
His (1994) study of MUD environments as compared to Internet Relay Chat (and other similar "chat" utilities) concluded that the chat services "supply a stark context for conversation", while MUDs furnish "a richer context intended to model aspects of the physical world" (Holmes, 1994). Similarly, his (1995) examination of deictic conversational modalities in online interactions sheds light on such curious observed utterances as "Anyone here near Chicago?" (Holmes, 1995). Owen (1994) worked with identity constructions spawned by the chat utilities of the world's largest commercial Internet provider, America Online (AOL), and posits the frequent appearance of self-effacing attribution invitations in online conversations. As the number and extent of the uses of computer-mediated communication (CMC) have grown exponentially in the last two decades, the communication discipline has produced a body of literature examining the interpersonal effects of such interaction. Some such studies purport that CMC is necessarily task-oriented, impersonal, and inappropriate for interpersonal uses (see Dubrovsky, Kiesler, & Sethna, 1991; Dubrovsky, 1985; Siegel, Dubrovsky, Kiesler, & McGuire, 1986). This effect is attributed to a lack of media richness and is sometimes called the "cues-filtered-out" perspective (Culnan & Markus, 1987). In other words, restricting interlocutors to the verbal channel strips their messages of warmth, status, and individuality (Rice & Love, 1987). However, as Walther, Anderson, and Park point out in their excellent (1994a) meta-analysis of published CMC studies, when provided with unlimited time, CMC users gain familiarity with the tools at hand, and communication becomes much more sociable, indicating that "the medium alone is not an adequate predictor of interpersonal tone" (Walther, 1995, p. 11).
Walther even posits the existence of what he calls "hyperpersonal" communication: "CMC which is more socially desirable than we can achieve in normal FtF [face to face] interaction" (Walther, 1995, p. 18). This phenomenon stems from three sources. First, CMC interlocutors engage in an over-attribution process, ascribing idealized attributes on the basis of minimal (solely textual) cues. In fact, Chilcoat and Dewine (1985) report that conversants are more likely to rate their partner as attractive as more cues are filtered out. (Their study compared face-to-face, video conferencing, and audio conferencing conditions, and the results were exactly the opposite of their hypotheses.) Second, CMC provides users with an opportunity for "selective self-presentation" (Walther & Burgoon, 1992), since the verbal channel is the easiest to control. Finally, certain aspects of message formation in CMC create hyperpersonal communication in that one has time to formulate replies and analyze responses to one's queries, a luxury denied, or at least restricted, in face-to-face dyads. A considerable number of papers and projects concerning MUDs have been produced within other disciplines. For instance, sociologist Reid (1994) examines a MUD as a cultural construct, rather than a technical one, and addresses issues such as power, social cohesion, and sexuality. Serpentelli (1992) examines conversational structure and personality correlates in her psychological study of MUD behavior. Likewise, NagaSiva (1992) treats the MUD as a psychological model, but draws on Eastern philosophy and discusses MUD experiences as mystical experiences. Young (1994) embraces the textuality of MUD experience as postmodern hyperreality, a rich new hybrid of spoken and written communication.
Numerous articles have been produced within the Computer Science discipline, many of which are of a non-technical nature, most notably Bartle (1990), whose experience as the co-creator of the first MUD makes him uniquely qualified as a commentator, Curtis (1992), another noted innovator in the field (and perhaps the original author of the phrase "text-based virtual reality"), and Bruckman (1993), whose extensive work on socio-psychological phenomena in MUDs at MIT has earned her deserved respect. Finally, Turkle's (1995) important new book examines numerous MUD-relevant topics, including artificial intelligence and "bots" (MUD robots), multiple selves and the fluidity of identity ("parallel lives"), and the effects of anonymity. She points out the psychological significance of role (game) playing, and reminds the reader that the word "persona" comes from the Latin word referring to "that through which sound comes", i.e., the actor's mask. Through MUDs and other forms of CMC, she believes that people can learn more about all the various masks people wear, including the one worn "in real life".

Recent innovations:

While the original "MUD" began a tradition of games with monster-slaying and treasure acquisition as their primary goals, the advent of the MOOs, MUSHes, MUSEs, and perhaps most notably, Jim Aspnes's TinyMUD in 1989, brought about new thinking about the purpose of Multi-User Dimensions. Rather than utilizing commands such as "wield sword" and "kill dragon", participants in these "social MUDs" use the virtual environment as a forum for interpersonal interaction and cooperative world creation. At the same time as these text-based virtual environments were rapidly multiplying, an arguably more ambitious project was well underway in Japan. Known as "Habitat", it was (and is) a "graphical many-user virtual online environment, a make-believe world that people enter using home computers..." (Farmer, Morningstar, & Crockford, 1994, p. 3). 
The creators of Habitat soon discovered that a virtual society had been spontaneously generated as a result of their efforts. One of the creators claims:

This is not speculation! During Habitat's beta test, several social institutions sprang up spontaneously: There were marriages and divorces, a church (complete with a real-world Greek Orthodox minister), a loose guild of thieves, an elected sheriff (to combat the thieves), a newspaper (with a rather eccentric editor), and before long two lawyers hung up their shingle to sort out claims. (Farmer, 1989, p. 2)

As these various MUD environments have developed, each with its own particularities of culture, a number of categories have emerged. Social MUDs have become virtual gathering places for people to meet new friends, converse with old ones, get help on their trigonometry homework, play "virtual scrabble", and assist in the continuing creation of the virtual environment. Some MUDs are known for their risque activities. On FurryMUCK, players assume the identity of various animals and have "mudsex" with one another, a rapid exchange of sexually explicit messages. Professional and educational MUDs have begun to appear recently with more "serious" uses in mind -- their aim is to provide a virtual spatial context (e.g., conference rooms, lecture halls, and private offices) for the participants therein, and even the creation of various pedagogical devices within the environment. A few MUDs have been set up as havens for virtual support groups for people with common misfortunes or interests. The most popular variety of MUD, though, harkens back to the philosophy of the original "MUD", involving puzzle-solving, dragon slaying, and treasure accumulation. It is these "adventure-style" MUDs which shall be the topic of inquiry for the remainder of this thesis. 
While it may be argued that the social MUDs, with interpersonal interaction as their participants' sole goal, would be more suitable, it is precisely because of this goal that adventure MUDs have been selected. It stands to reason that the communicative phenomena to be found on purely social MUDs may be even more firmly entrenched than on adventure MUDs, due to the wealth of additional cultural cues which such environments spawn. Therefore, it is important to demonstrate 1) that virtual cultures develop on adventure-style MUDs, 2) that these cultures are quite real to the participants therein, and 3) that nonverbal communication occurs in these worlds designed with point accumulation in mind, and created solely by words.

Adventure MUDs

While a few "pay MUDs", i.e., MUDs which charge for access, do exist (and claim to be more dynamic and carefully programmed), the vast majority of adventure MUDs are created and maintained by volunteers. These volunteers are often computer science majors at major universities who have access to the hardware needed to run a MUD and make it accessible to multiple users at once. Once the hardware is in place, a "mudlib" must be decided upon. A "mudlib" is the most basic code that makes the MUD run, i.e., the code that defines the mechanisms by which the spatial metaphor is created, defines the difference between living and non-living objects, and calculates the formulae involved in combat. Beyond the technical distinction of which mudlib a MUD runs on, the next most distinctive feature is probably the theme which guides the builders (i.e., the people who actually program the objects in the MUD - every room, monster, weapon, etc.) in their creation of the MUD. The first MUDs were most commonly based on a Tolkienesque world of hobbits and giants, swords and sorcery. Now that the MUD community has expanded, however, diverse themes can be found, such as MUDs based on Star Trek, Star Wars, and other popular fantasy genres. 
Some MUDs (mostly social MUDs) are simply set in American cities, such as BayMOO (San Francisco) and Club Miami (Miami, FL). Other MUDs are themed not in setting but in purpose; they exist as meeting places for people with common interests, such as support groups for zoophiles, or discussion groups for astronomers. Still other MUDs are set simply in a virtual representation of the administrator's home. (The WWW site http://www.mudconnect.com contains an extensive list of current publicly available MUDs.) By far, however, the fantastical swords-and-sorcery adventure-style MUDs are the most popular among MUD players. As such, they have been developed perhaps more than any other variety, with a rich tapestry of literature from which to draw, and perhaps even attracting especially imaginative builders and players. It may be speculated that an additional reason adventure-style MUDs are so popular is that the treasure and point gathering that takes place therein appeals to many computer enthusiasts' desire for mastery of technique and knowledge. Each adventure-style MUD (referred to simply as a MUD from now on, unless otherwise noted) has a primary dichotomy, often referred to as the "mortal/immortal" dichotomy. Simply put, the "immortals" are those participants who have access to the programming which makes the MUD run. "Mortals" do not. Though the colorful terminology may change from MUD to MUD, this split is sure to exist. It should be noted that this is a significant difference between adventure-style MUDs and purely social MUDs (most often based on MOO code), in which all members enjoy some access to the programming, and therefore the ability to create their own objects. Every MUD participant starts out as a "mortal". This entails no access to the programming language at all. That is, they receive all the textual descriptions of the virtual environment, but none of the underlying code that makes the MUD run. 
For the mortals, the spatial metaphor is reified through this limited access. They have no choice but to exist within the spatial metaphor and interact with the other characters and monsters therein. Most adventure MUDs offer their participants a range of classes, or professions (such as fighter, thief, or necromancer), and races (fantastical things like ogres and elves). Besides being a colorful addition to the participant's virtual persona, these designations have various effects on the player's experience of the MUD. Ogres may be quite strong, but poor at spell casting. Mages may have an arsenal of spells at their disposal, but may be struck down easily when hit. These details become pertinent when one understands the "goal" of an adventure MUD. In the maze of rooms that makes up a typical adventure MUD, there reside various programmed monsters to be slain and puzzles to be unraveled. Players will typically spend much of their time dashing from room to room, engaging in computer-moderated, verbally described combat with these creatures. When successful in vanquishing these foes (success is determined in large part by the programmed attributes of the combatants, though player strategy plays a part), players may reap their bounty. Rewards may be found, such as equipment (which may aid the character in future battles or be sold at the shop), money (which may be used to purchase equipment), and other treasures. Above all, though, the player of the adventure MUD seeks "experience points", which determine how powerful the character can become. When a sufficient quantity of experience points has been collected, the character may "advance a level", thereby increasing his or her mastery of combat, spell casting, or other skills. There are risks, of course, in such valorous activity. Every time a character enters into combat with a foe, there exists a chance of death. The severity of players' deaths varies from MUD to MUD. 
On some MUDs, characters may simply lose the treasures they have amassed during their session. On others, significant reductions in a character's quantified skill levels may occur, while on a few MUDs, death is quite realistic and harsh - the character is simply erased. Death is not a random occurrence on well-tuned adventure MUDs. Each character is at a quantifiable distance from death at any given moment, a distance measured in "hit points". Every time s/he is struck in combat (which proceeds quite rapidly, text scrolling across the player's screen), that number of hit points is reduced. When it reaches zero, the character dies. Since characters engage in combat often, and combat reduces hit points, there exists a need for healing, so that characters do not simply grow weaker with each successive battle. On adventure MUDs, these biological needs are taken care of through the presence of pubs and restaurants from which one may buy various cocktails and foodstuffs, all of which contribute to a character's health. This virtual biology is extended in that characters can only eat and drink a certain amount before becoming satiated, after which they must wait a short time before consuming again. Some MUDs even require each character to eat from time to time, whether or not healing is needed - they get hungry. Besides food and drink (which cost gold coins), there exist healing spells which certain classes of character may cast. This is just one of the ways that interaction between characters is spawned on MUDs. If one character is injured and knows that a healer is connected to the MUD at the time, s/he may seek the healer out and ask for help, perhaps even offering something in exchange. Some MUDs, for instance, require material components for spell casting (eyes of newt, and so forth), thus providing non-spell-casters with some bargaining power. An additional source of interaction between players is the guild system. 
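The hit-point, healing, and experience mechanics described above can be sketched in a few lines of code. This is a minimal illustration only, not code from any actual MUD; the numbers (hit points, damage ranges, experience thresholds) are all invented for the example.

```python
import random

class Character:
    """A minimal sketch of an adventure-MUD character (all values invented)."""
    def __init__(self, name, hp=50, level=1, xp=0):
        self.name, self.max_hp = name, hp
        self.hp, self.level, self.xp = hp, level, xp

    def strike(self, foe, damage):
        """Each blow reduces the foe's hit points; at zero the foe dies."""
        foe.hp -= damage
        if foe.hp <= 0:
            self.xp += 100                  # bounty for vanquishing the foe
            while self.xp >= self.level * 200:
                self.level += 1             # "advance a level" at a threshold

    def eat(self, healing=10):
        """Food and drink restore hit points, capped by satiation."""
        self.hp = min(self.hp + healing, self.max_hp)

hero, orc = Character("hero"), Character("orc", hp=15)
while orc.hp > 0:                           # computer-moderated combat loop
    hero.strike(orc, damage=random.randint(3, 8))
print(hero.xp)  # 100: one slain foe's worth of experience
```

The point of the sketch is simply that "death" is a deterministic bookkeeping event - hit points reaching zero - and that healing exists to offset the steady drain of combat.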
While each character has a "class", or profession, which determines what proficiencies s/he has, guilds are more like social organizations. A guild could be based upon traditional notions of chivalry, or black magic, or the love of chocolate, or anything else that the creators decide. Guilds generally have a private location for guild members to congregate and interact, and perhaps a few specialized signs or signals that they use to recognize one another. Guilds often provide an additional reason for interaction, even to those players most interested in accumulating experience points. Many MUDs allow characters of sufficient experience the opportunity to ascend into the ranks of the "immortals", those individuals with some degree of access to the actual programming that makes the MUD run, and the power to create and manipulate objects therein. For the immortals, combat skills are completely irrelevant; they can simply erase any (non-player) foe in their path. As such, the very nature of the environment is completely different for them. Within the immortal group there are several levels of access to the programming, each with its own colorful moniker. The hierarchy outlined below is based roughly on the author's acquaintance with two popular MUDs, Ancient Anguish (described at length in Masterson, 1995) and Paradox II (the development of this hierarchy described in part in Masterson, 1995b). The lowest level of immortals includes the Builders, Wizards, or Creators. This group generally consists of those players who have reached a certain level of expertise and experience, and have been granted limited access to the MUD code. They are generally given a directory (MUD file syntax is much like that of the Unix operating system) in which they can write and edit files which may create objects in the MUD. It is this group of immortals whose responsibility it is to continue the creation and expansion of the virtual geography of the MUD. 
It is also generally the largest group of immortals. Various other groups of immortals are responsible for overseeing the activities of the wizards and the players. A common division involves one person (often called an "arch") who determines whether the areas (a term which includes the monsters and objects therein as well) that the wizards are making are of sufficient quality (imaginatively described and comprehensively coded) to install in the game for players to enjoy (the "QC" or "Approval Arch"). Another arch might be responsible for ensuring that all the areas are smoothly integrated into the milieu of the MUD, and that there are neither areas in which players will suffer grave misfortune for little reward nor areas from which players stagger home with loads of treasure at little risk (the "Balance Arch", or "World Arch"). Another arch may be responsible for ensuring that the underlying code governing combat, character death, and the interaction of objects runs smoothly (the "Mudlib Arch"). Finally, there is usually an arch whose responsibility it is to ensure a fair and equitable environment for the wizards to code in and the players to adventure in; in other words, an individual responsible for the upkeep of the rules of the MUD (the "Law Arch"). Though this scheme is by no means the only way that adventure MUDs govern themselves, it is quite common. All of the arches will have greater access to the programming than do the wizards. The individuals who occupy the top tier of the adventure MUD immortal hierarchy are known as the Admins (administrators). This group of individuals is endowed with the ultimate responsibility for the maintenance and upkeep of the MUD. They have access to every file that comprises the MUD. Mortal concerns are outside the scope of their responsibilities.

The issue at hand

A common descriptive metaphor in the literature of nonverbal communication states that "We don't need to be told we are at a wedding." 
In other words, our nonverbal communication provides essential contextual cues, moment by moment, which help us and others to make sense of our interpersonal situation. Just as a picture may take the place of a thousand words, so too may a gesture. It can be seen from the preceding section that there are numerous attributes of MUDs that give rise to interaction between participants. This interaction brings about a sense of community among participants on a given MUD. Indeed, some people get quite passionate about their membership in the "MUD-family," and connect to the MUD for as many as 80 hours a week, which is testimony to MUD conversations' compelling interactivity. Given that this is the case, though, how is it that in virtual c

f:\12000 essays\technology & computers (295)\NOVEL 3 12.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Novell NetWare 3.12

Introduction

Networks have proven their worth in schools and businesses over the years and are becoming ever more common. These networks are growing larger and more powerful, and therefore more complicated. To be able to manage them, one must at least be able to install and understand them.

Novell's features

Novell NetWare is a fully 32-bit multitasking environment and can therefore only be installed on a 386 or higher. This is necessary because NetWare 3.12 supports up to 250 users, and to handle all of this NetWare must make optimal use of the 32-bit processor's power. NetWare also offers good security, both in access control and in data protection. Access control is handled through login rights, file rights, and so on. For data protection, think of "mirroring" and "duplexing" across two hard disks, and of UPS monitoring (UPS = Uninterruptible Power Supply). For network management there is the remote console service, which means you can carry out tasks from a workstation as if you were working at the server itself, and the MONITOR utilities, with which you can observe all activity on the network. 
NetWare's modular design makes it possible to load and unload tools while the server is still running, and to add your own programs or improvements separately. NetWare now runs under UNIX, OS/2, DOS, and Macintosh, and can also communicate between these platforms. The overall improvements are shown in the table below.

Specification                        NetWare v2.2     NetWare v3.12
Hard disks per volume                1                32 (or 16 if mirrored)
Volumes per server                   32               64
Volumes per hard disk                16               8
Directory entries per volume         32,000           2,097,152
Maximum volume size                  255 MB           32 TB
Maximum file size                    255 MB           4 GB
Maximum addressable disk storage     2 GB             32 TB
Maximum addressable RAM              12 MB            4 GB
Maximum volume name length           15 characters    15 characters
Maximum directory/file name length   14 characters    12 characters (DOS format)
Name space support                   DOS, Macintosh   DOS & Windows, Macintosh, UNIX, FTAM, OS/2
Disk block sizes                     4 KB             4 KB, 8 KB, 16 KB, 32 KB, 64 KB

Before installing

Before the actual installation can begin, a number of requirements must of course be met. The system must satisfy the hardware requirements and be correctly assembled, the power supply must be properly arranged, the system must contain at least 4 MB of memory plus 50 MB of disk space for the NetWare and DOS system files, and a network card (and cable) must be installed. To calculate the required memory more precisely there is a formula: 4 MB for the standard drivers and install module, + 2 MB for network add-ons such as printer modules and other standard modules, + 0.008 x hard disk size (in MB), + 1-4 MB of RAM as cache. The network card is usually an NE2000 or compatible card and will usually be set to address 300 and IRQ 3; you need to know this before installing, because NetWare will ask for it.

Installing

Installation can be done in three ways: from diskette, from CD-ROM, or from a network directory. When installing from diskette or CD-ROM, you must first create 3 SYSTEM diskettes containing the drivers and NetWare files. 
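The memory formula above can be turned into a quick calculation. The function and the example figures below (a 500 MB disk, 2 MB of cache) are illustrative inputs, not values taken from the text:

```python
def netware_ram_mb(disk_mb, cache_mb=2):
    """Apply the rule of thumb from the text: 4 MB for the standard
    drivers and install module, + 2 MB for network add-ons, + 0.008
    times the hard disk size in MB, + 1-4 MB of RAM as cache."""
    assert 1 <= cache_mb <= 4, "the text suggests 1-4 MB of cache"
    return 4 + 2 + 0.008 * disk_mb + cache_mb

# A hypothetical server with a 500 MB disk and 2 MB of cache:
print(netware_ram_mb(500))  # 12.0 (MB)
```

So even a modest disk pushes the server well past the 4 MB minimum, which is why the formula matters before buying hardware.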
Installing from the network, however, is somewhat simpler. First create a 10 MB partition for MS-DOS and format it with system files. Restart the server and create a server directory with MD SERVER. Then log in to the file server where the files are located and copy the original SYSTEM_1, _2, and _3 diskettes to C:\SERVER (this can also be done from diskette or CD). Run the INSTALL program from the install diskette; it will use the SYSTEM 1, 2, and 3 diskettes. We can then continue working in NetWare itself by starting SERVER.EXE in the server directory, or by letting the installation put SERVER.EXE in AUTOEXEC.BAT. We are now in NetWare's own operating system. First we must give the server a name, which may be between 2 and 47 characters long; for example:

*Fileservername: TER_AA

After this we must fill in an IPX number. For the server this is always 1:

*IPX internal network number: 1

To drive the SCSI or ISA controller for the CD-ROM etc., we must load the supplied driver with the command LOAD xxxxx.xxx (where xxxxx.xxx is the name of the driver); in our case, LOAD ISADISK. Now we can continue with the installation. We give the command LOAD INSTALL, after which a menu appears on the screen:

INSTALLATION OPTIONS MENU
  disk options
  volume options
  system options
  exit

We choose DISK OPTIONS, then PARTITION TABLES, and then CREATE NETWARE PARTITION. Here we usually accept the default values and press OK. The computer will then ask whether it should create the partition (Yes/No); Yes, of course, to continue. The partition is created without any need to format it. If you have more than one hard disk, you can set up mirroring and duplexing here; this requires two identical hard disks of the same size, or partitions of the same size. Back in the main menu, we choose VOLUME OPTIONS. You may create up to 64 volumes, one of which MUST be named SYS:. This volume holds the SYSTEM, PUBLIC, LOGIN, and MAIL directories. 
By pressing INS we can add a volume; here we create the volume SYS: and, if desired, further volumes. The block size can also be set here, to 4, 8, 16, 32, or 64 KB. A large block size is better for large database files; a smaller block size saves space if you manage many small files. Back in the main menu we choose SYSTEM OPTIONS and then COPY SYSTEM AND PUBLIC FILES. NetWare now displays the message:

Insert disk "NetWare 3.12 install diskette" in drive A

But since we may not be installing from diskette here, you can press F6 to specify an alternative drive or directory, after which NetWare starts copying until it reports "File upload completed." Now we must load the drivers for the network card etc. Because NetWare is multitasking, we do not have to leave the program; with [Alt + Esc] you can switch tasks over to the NetWare prompt, where we give the command LOAD NE2000 (or another driver), and NetWare responds:

loading NE2000.LAN
autoloading ETHERTSM.NLM (Topology Support Module)
autoloading MSM31X.NLM (Media Support Module)

To bind the network card to the IPX module, the command BIND IPX TO NE2000 must be given, after which NetWare again responds:

Network number: 1
IPX LAN protocol bound to NE2000

For the network number you usually fill in 1, unless you own another network as well, in which case you number upward. [Alt + Esc] switches tasks back to the installation menu. 
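The block-size trade-off just mentioned (large blocks for large database files, small blocks to save space with many small files) is easy to quantify: every file occupies whole blocks, so the last block of each file is partly wasted. A quick sketch, with an invented workload of small files:

```python
def disk_usage(file_sizes, block_size):
    """Total bytes consumed when each file is stored in whole blocks."""
    blocks = sum(-(-size // block_size) for size in file_sizes)  # ceiling division
    return blocks * block_size

small_files = [300] * 1000     # a thousand 300-byte files (invented example)
for bs in (4 * 1024, 64 * 1024):
    print(bs, disk_usage(small_files, bs))
# Each tiny file still claims one full block, so with 64 KB blocks the
# same files occupy 16 times the space they occupy with 4 KB blocks.
```

For a volume of large database files the slack per file is the same one partial block, which is negligible relative to the file size, while the larger blocks reduce bookkeeping overhead.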
We choose CREATE AUTOEXEC.NCF FILE:

fileservername TER_AA
ipx internal net 1
load NE2000 slot=6 frame=ethernet_802.3
bind IPX to NE2000 net=1

Note: frame=ethernet_802.2 is for VLM (Virtual Loadable Module) clients and frame=ethernet_802.3 for IPX (Internetwork Packet Exchange) clients (IPXODI for multi-OS systems), so change 802.2 to 802.3 if necessary. We save this file and then choose CREATE STARTUP.NCF FILE. This contains the SCSI/IDE driver, e.g.:

load ISADISK

To continue, for example to create a print server, we must proceed at a workstation. Before we can log in from a workstation, we must create a password. With [Alt + Esc] we switch tasks over to the NetWare prompt and type LOAD RSPX:

loading RSPX.NLM (Remote Console SPX driver)
autoloading REMOTE.NLM (NetWare Remote Console)
enter new password for remote console: TER_AA (the password)

To put all the settings into effect and to check that the .NCF files are correct, we shut the server down with the command DOWN and return to DOS with EXIT. Then we start the file server again with the commands:

C:\>CD\SERVER
C:\>SERVER

Creating a print server

The server is up, and we go to a workstation and log in on the file server TER_AA. At F:\SYSTEM> we type PCONSOLE. A menu appears and we choose PRINT SERVER INFORMATION and then PRINT SERVERS. No print servers are listed yet, but by pressing [INS] we can add one:

New print server name: PSERV

With the cursor you select print server PSERV, press RETURN, and go to PRINTER CONFIGURATION. There, under "Configured printers," you see "Not installed 0." Press RETURN to add one, and a list of options appears:

name: printer 0: HP-Laserjet_4M (the name of a printer)
type: parallel LPT1

Press [Esc] and answer Yes to "Save?". Now we must go back to AVAILABLE OPTIONS and choose PRINT QUEUE INFORMATION, then PRINT QUEUES. Again press [INS] to create a queue. 
Give the queue a clear name:

New print queue name: LASERJET

To assign a queue to a print server, we go in the main menu to PRINT SERVER INFORMATION and choose PSERV as the server:

- Print server configuration [Return]
- Queues serviced by printer
- LPT1 [Return] [INS] available queues
- COM1 [Return] [INS] available queues
- [Esc] [Esc]
- Log in as USER/GUEST
- Create a print job: F:\>PRINTCON, choose "edit print job configurations," [INS], new name LPT1_J [Return] (message: "no form defined on server"), [Esc]
- Set the print queue to LPT1_Q
- Set the print banner to NO
- [Esc] Save? Yes
- Exit PRINTCON
- Now add to AUTOEXEC.NCF:
  load RSPX
  load pserver pserv

To go further, a scheme must first be drawn up for the file structure on the file server. The first and essential directories have already been created by NetWare; most file servers look like the following. After that the USER GROUPS and USERS must be created. When the server is installed, the users SUPERVISOR and GUEST and the group EVERYONE are created automatically. By granting rights to users you can restrict or enlarge their access. Usually there are people with extra rights besides the SUPERVISOR who manage part of the network, for example as in the diagram below. Such people receive extra rights over a certain group in order to manage it, and the SUPERVISOR in turn can manage them. By creating GROUPS you can add users to a group to which a certain set of rights, programs, and a menu has been assigned, so that you do not have to build new menus or set all the rights for every new user, and so that everything stays manageable. With the SYSCON program you can configure all of this. You create a user by pressing [INS] at USER INFORMATION and entering a name. By then selecting this new user you can set his or her rights. 
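The rights administration described above, where users inherit what their groups have been granted per directory, can be modeled roughly as sets of NetWare's trustee-right letters (S, R, W, C, E, M, F, A). The group, user, and directory names below are made up for illustration:

```python
# NetWare trustee rights: S)upervisory, R)ead, W)rite, C)reate,
# E)rase, M)odify, F)ile scan, A)ccess control.
# Groups, users, and directory grants below are invented examples.
group_rights = {
    "EVERYONE": {"SYS:PUBLIC": set("RF")},      # read and file-scan only
    "STAFF":    {"SYS:DATA":   set("RWCEMF")},  # a hypothetical group
}
user_groups = {"GUEST": ["EVERYONE"], "STIJN": ["EVERYONE", "STAFF"]}

def effective_rights(user, directory):
    """A user's rights in a directory: the union over all their groups,
    so adding a user to a group grants everything the group holds."""
    rights = set()
    for group in user_groups.get(user, []):
        rights |= group_rights.get(group, {}).get(directory, set())
    return rights

print(sorted(effective_rights("STIJN", "SYS:DATA")))  # ['C', 'E', 'F', 'M', 'R', 'W']
print(sorted(effective_rights("GUEST", "SYS:DATA")))  # []
```

This is why group-based administration scales: one change to a group's grant updates every member's effective rights at once.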
As SUPERVISOR you are shown the following menu, in which you can assign rights, add users to groups, assign login times, and so on. To create groups, choose GROUP INFORMATION in the main menu; again with [INS] you can add a group and give it a name. To a group you can assign directories, together with the rights that users have within them, according to the table below; you add these with [INS].

Letter   Right
S        Supervisory
R        Read
W        Write
C        Create
E        Erase
M        Modify
F        File scan
A        Access control

You can then add USERS to a GROUP, or assign a GROUP to a USER.

LOGIN SCRIPTS

For each USER you can write a login script. With a login script you can:
- map drives and attach search drives to directories (mapping is NetWare's counterpart of DOS's SUBST);
- display messages;
- set system variables (time, search path, etc.);
- run programs or menus.

There are three different kinds of login scripts:
- System login script: sets the primary settings for all users (runs first).
- User login script: sets up the environment for a single user, for example a menu option or a username for electronic mail. This script runs after the system login script.
- Default login script (a part of LOGIN itself): this script is executed when the SUPERVISOR logs in for the first time. It contains the most important SEARCH MAPs and the NetWare utilities.

A script can be edited under USER INFORMATION if you have sufficient rights.

Conclusion: NetWare is a VERY complicated package. The description given here is a simple installation method to get started with. Making the network run better requires considerably more knowledge.

Stijn Peeters, 10-6-1996

f:\12000 essays\technology & computers (295)\Now is the time.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Now is the time to become computer literate. 
Now is the time to become familiar and comfortable with the computer, because in the future we will become a virtually paperless society and many daily activities will be linked to the computer. Mail delivery to homes and businesses will be almost entirely phased out, and e-mail will replace it. Bills will come via the computer and be paid the same way. Paychecks will be electronically deposited to your bank account. On special occasions such as birthdays, greeting cards will be sent from your computer to your loved one's computer. Shopping malls will become cyber malls, and we will do our shopping via the computer. You will be able to view on your monitor how you would look in an outfit you are considering buying. Imagine traveling through the entire mall from a comfortable chair in front of your computer. Push a button and the entire stock of a store will be at your fingertips. When you do go to a store to shop, you will not use money; you will use either a credit card or a debit card, which will automatically deduct the amount of your purchase from your bank account. Our homes will be run by computers. Computers will adjust the temperature. Home appliances will be linked to the computer. Imagine driving home from work, calling your computer, and having it start dinner for you. Have it adjust the thermostat so your home will be a comfortable temperature when you arrive. Window coverings will be adjusted to allow the correct amount of sunlight in. Light fixtures will automatically adjust to the right level of light in your home. The way business is conducted will be entirely changed. Instead of long-distance business trips, business will be conducted via interactive teleconferences. Documents and files will be stored on computers' hard drives. Much of this is done today, but in the future it will expand as we become a paperless society. Many workers will not have to go to a place of employment; they will work from their homes via the computer. 
For those who do have to drive to work, commuting will become less stressful as computers help keep traffic congestion down. Cars will have on-board computers to keep them aware of road conditions, traffic backups, and which route is best to take. On-board computers will also replace maps and give directions from your current location to where you are going; if you happen to get lost, your computer will get you back to the correct road. The education system will also join the computer age. Every student will have access to a computer. Textbooks will be on disks. Students will have access to a vast amount of reference material, via computer and modem, from faraway universities and other institutions. Homework will be done on the computer. Instead of turning in papers on which you have done your homework, you will either turn in a disk or send the work to your teacher by modem. Teachers will no longer have to spend hours grading papers; homework and in-class work will be graded by the computer. Tests will be taken on the computer, and as soon as you finish you will know what your score is. At the end of the grading period your teacher will just punch a few keys on her computer and your report cards will print out, as the computer keeps track of all your grades for the quarter. Some classes will be conducted by interactive teleconference, much the same way business conferences are conducted. This will give students in small schools the same educational opportunities as those in the larger school systems. Our leisure time will also be affected by the expanded use of computers. In the future, the home communication system (phones, e-mail, faxes, and modems) and TV service will be integrated into one system. If you want to read the newspaper, you will not have to walk to the driveway to pick it up; just flip on your TV and, with the aid of your computer, pull the paper up on your screen and read. Magazines will be available the same way. 
If you want to watch a movie, just turn on the TV and you will receive a list of what is on. Order by computer, then sit back and enjoy the movie. Video games will be available to play on your TV the same way. People whose hobbies are collecting things such as cards or stamps can receive the latest information on their collections from the computer. Find yourself putting on a few extra pounds from spending all your time in front of your computer system? You can get exercise programs and a computer-generated diet geared to your specific needs from your computer. So to be a productive person in the future you will need to be prepared, and NOW IS THE TIME! f:\12000 essays\technology & computers (295)\Optical Storage.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Optical Storage Mediums James Ng The most common way of storing data in a computer is magnetic. We have hard drives and floppy disks (soon making way to the CD-ROM), both of which can store some amount of data. In a disk drive, a read/write head (usually a coil of wire) passes over a spinning disk, generating an electrical current, which defines a bit as either a 1 or a 0. There are limitations to this, though: we can only make the head so small, and the tracks and sectors so close together, before the drive starts to suffer from interference between nearby tracks and sectors. What other option do we have to store massive amounts of data? We can use light. Light has its advantages. It has a short wavelength, so we can place tracks very close together, and the size of the track we use is dependent on only one thing - the color of the light we use. An optical medium typically involves some sort of laser, for laser light barely diverges, so we can pinpoint it to a specific place on the disk. By moving the laser a little bit, we can change tracks on a disk, and this movement is very small, usually less than a hair's width. 
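To get a feel for how such finely spaced tracks add up, here is a rough back-of-the-envelope sketch in Python. All of the numbers are assumed, typical CD figures (a 1.6-micron track pitch, a program area from about 25 mm to 58 mm radius, 1.2 m/s linear speed, and roughly 150 kB/s of user data at 1x); none of them come from this essay, so treat the result as an illustration rather than a specification.

```python
import math

# Assumed, typical CD parameters (illustrative, not from the essay):
track_pitch_m = 1.6e-6                # spacing between adjacent spiral turns
r_inner_m, r_outer_m = 0.025, 0.058   # radii bounding the program area
linear_speed_mps = 1.2                # constant linear velocity at 1x speed
data_rate_bps = 150_000               # ~150 kB/s of user data at 1x

# The data spiral makes one turn per track pitch across the program area.
turns = (r_outer_m - r_inner_m) / track_pitch_m
avg_circumference_m = 2 * math.pi * (r_inner_m + r_outer_m) / 2
spiral_length_m = turns * avg_circumference_m

# Reading the whole spiral at constant linear velocity gives the playing
# time, and playing time times data rate gives a capacity estimate.
play_time_s = spiral_length_m / linear_speed_mps
capacity_bytes = play_time_s * data_rate_bps

print(f"spiral turns:  {turns:,.0f}")
print(f"spiral length: {spiral_length_m / 1000:.1f} km")
print(f"playing time:  {play_time_s / 60:.0f} minutes")
print(f"capacity:      {capacity_bytes / 1e6:.0f} MB")
```

With these assumed numbers the spiral comes out around 5.4 km long, the playing time around 74-75 minutes, and the capacity around 670 MB - close to the familiar audio-CD figures, which supports the essay's point that micrometre-scale track spacing is what makes such capacity possible.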
This allows one to store an immense amount of data on one disk. The light does not touch the disk surface, thereby not creating friction, which leads to wear, so the life of an average optical disk is far longer than that of a magnetic medium. Also, it is impossible to "crash" an optical disk (in the same sense as crashing a hard drive), since there is a protective layer covering the data areas, and the "head" of the drive can be quite far away from the disk surface (a few millimeters, compared to micrometers for a hard drive). If this medium is so superior, then why is it not standard equipment? It is. Most new computers come with a CD-ROM drive. Also, it is only recently that prices have come low enough to actually make them affordable. However, as the acronym states, one cannot write to a CD-ROM disk (unless one gets a CD-Recordable disk and drive). There are products, however, that allow one to store and retrieve data on an optical medium. Some of those products are shown in Table 1. However, the cost of these is quite high, so it doesn't usually make much sense for consumer use yet, unless one loves to transfer 20-megabyte pictures between friends. One will notice on the table that there are some items labeled "MO," or magneto-optical. This is a special type of drive and disk that gets written by magnetic fields and read by lasers. The disk itself is based on magnetism that affects the reflective surface. Unlike floppy disks, erasing such a disk at room temperature requires a very strong magnetic field, much stronger than what ordinary disk erasers provide. To aid in writing to these MO disks, a high-power laser heats up part of the disk to about 150 °C (near the Curie temperature), which reduces the disk's ability to withstand magnetic fields. Thus, the disk is ready to be rewritten. The disk needs two passes to change the bits, though. The first pass "renews" the surface to what it was before it was used. 
The second pass writes the new data on. The magnetic field then alters the crystal structure below it, thereby creating places from which the laser beam will not reflect to the photodetector. Another type of recordable medium is the one-shot deal. The disk is shipped from the factory with nothing on it. As you go and use it, a high-power laser turns the transparent layer below the reflective layer opaque. The normal surface becomes the lands (the reflective areas on a normal CD) and the opaque surface the pits (pits on a normal CD do not reflect light back). These CDs, once recorded, cannot be re-recorded, unless saved in a special format that allows a new table of contents to be used. These CDs are the CD-Recordable and the Photo CD. The Photo CD is in a format that allows one to have a new table of contents that tells where the pictures are. It is this that distinguishes between "single-session" drives (drives that can only read photos recorded the first time the disk was used) and "multi-session" drives (which can read all the photos on a Photo CD). To read an optical medium, a low-power laser (one that cannot write to the disk) is aimed at the disk, and data is read back by seeing whether the laser light reflects to the photodetector. The photodetector returns signals telling if there is or is not light bouncing back from the disk. To illustrate this process, see Figure 1. Optical data storage is the future of storage technology. However, it will take some time before prices are low enough for the general public. Applications get bigger, data files get bigger, games get bigger, etc. The humble floppy disk, with its tiny 1.44-megabyte capacity (actually, 1.40 megabytes... since disk companies like to call 1,024,000 bytes a megabyte, when a megabyte is actually 1,048,576 bytes, or 2^20 bytes) will be no match for the latest and greatest game, requiring 2+ gigabytes of space (and such games do exist now... 
in 4 CD-ROMs); the hard drive will reach its capacity, while optical drives get smaller, faster, and cheaper. The speed of optical drives today is appalling, to say the least. Also in the future will be hard drives based on optical technology, since nowadays a 5 1/4-inch disk can contain as much as 1 gigabyte of data. Optical drives, with their high bit densities, are in the near future... Sources Used: UMI - May 1992 BYTE Magazine; TOM - June 1992 PC Magazine (64J2528); CD-ROMs - Grolier's multimedia; Printed - various BYTE, ComputerCraft, MacUser and MacWorld magazines; Internet - Figure 1: http://www.byte.com/art\9502\img\411016E2.htm Table 1: http://www.byte.com/art\9502\img\411016Z2.htm f:\12000 essays\technology & computers (295)\Outsourcing.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Contents. 1 Abstract 2 Introduction 3 Fundamentals 4 The Main Strategy 5 Successful Outsourcing 6 The Economics of Outsourcing 7 Conclusion Outsourcing and how it can help IT Managers enhance their projects. Abstract With computer systems/projects and their implementations getting more complex with every day that passes, the tendering of IT responsibilities to external parties is becoming more and more attractive to the IT Managers of large organisations. The common name for this type of operation is "Outsourcing". It is the attempt of this paper to explain outsourcing, its pros and cons, and how it can help our friendly IT Manager enhance developments or implementations. Introduction Outsourcing can be defined as a contract service agreement in which an organisation hires out all or part of its IT responsibilities to an external company. More and more companies are leaning towards outsourcing; it could be said that this is caused by the growing complexity of IT and the changing business needs of an organisation. As a result, an organisation may find that it is not possible to have all its IT services supplied from within its own company. 
Given this, an IT manager may choose to seek assistance from an external contractor/company to supply the services the organisation lacks. In addition, business competition has set the pace for an organisation to continue to strive for internal efficiency. It also needs to look for a way to transfer non-core activities or "in house" services and support activities to external specialist organisations who can deliver quality services at a lower cost. Fundamentals In deciding whether or not to use outsourcing, the main consideration is the price of delivery of services by an external contractor/company. Although price of delivery is a primary factor for outsourcing, other issues should be considered; e.g., price should be measured against the overall package offered by the external contractor/company. Briefly: is it a good, competitive price in relation to the services rendered by the company, their skills/competency and experience, and timely delivery? The organisation also needs to consider outsourcing in light of its long term strategic directions and its information needs. Competition is another area to be carefully considered. Competition opens up opportunity for all potential suppliers to conduct business with the organisation. The competitive process allows organisations/IT managers to derive the best outcome. From open and effective competition, the organisation is able to judge soundly in determining the best strategy after it has taken account of the competition and the value-for-money principle. IT managers can go through lengthy procedures to minimise problems with outsourcing, but things can still go wrong and intended objectives may not be achieved. 
To overcome such mistakes, it may be prudent to look at other companies that have undertaken outsourcing and learn from their successes and mistakes. Listed below are some of the major issues to be considered when using outsourcing: · An IT manager that undertakes outsourcing must be able to clearly identify the organisation's long term IT strategic directions and long term information needs. · The organisation must be able to clearly define its business objectives. · To avoid unnecessary friction between the organisation and the external service provider, it would be prudent to incorporate an "extraordinary events" clause into any contract entered into. This clause should cover any extraordinary changes in circumstance that may occur. It also allows a lot of flexibility between the two parties. · The IT manager should identify all the external and internal stakeholders and the impact that the outsourcing may have on them. · Learn from other companies; use their mistakes and successes to avoid duplication and waste of manpower. · The IT manager should communicate regularly with anyone in the organisation who is affected by outsourcing, even if the effect is very small. · The IT manager should make sure that the external service provider knows exactly what is expected of them, e.g. the exact services required. · The IT manager should allot adequate time and the correct resources to the problem at hand; this is to ensure the best possible outcome from the service arrangement. · The IT manager should assign skilled staff to manage the external contractor and to monitor closely the external contractor's performance. · The IT manager should monitor and assess the contractor to ensure quality of service, not just price of the delivery of services. The Main Strategy In an organisation, the IT infrastructure comprises a number of technical and service areas. 
Before going through any outsourcing decision process, the organisation first needs to assess its sourcing across the entire IT infrastructure. Once this is done, the organisation can then determine the best sourcing strategy against a number of perspectives. In order to determine the optimum sourcing strategy, an organisation needs to look at a number of perspectives or alternatives and then balance these perspectives with the benefits and risks of outsourcing. With this information, an organisation can derive a more structured methodology for a balanced view of the IT infrastructure and its components. It can be stated that there is no one approach to outsourcing. However, in practice there are three common methods used by practitioners: 1 Outsourcing a significant proportion of the IT services and technical areas. This approach has a lower co-ordination cost and also has a greater organisational impact; 2 Assessing each IT service and technical area independently. A number of vendors are used to match the needs of each outsourcing event. This approach selects the best vendor and deal for each outsourcing arrangement; however, it involves higher internal costs and synergy problems; 3 Selecting a prime contractor. The prime contractor can select and manage all other vendors. This approach depends on the learning curve and therefore takes longer. As part of determining the outsourcing strategy, it is useful for the organisation to incorporate any experience derived from other organisations that have outsourced, and from other forms of outsourcing that the organisation itself has undertaken. The organisation should also perform an initial investigation of the potential vendors' backgrounds. Furthermore, the organisation should examine the different forms of outsourcing that the vendors are able to provide. 
The organisation must identify all the internal and external stakeholders and the impact that outsourcing may have on them and their objectives. The internal stakeholders include IT staff, users, and management; the external stakeholders include unions, customers, and existing suppliers (IT and non-IT). The IT manager should also undertake a cost-benefit analysis of all internal costs and external provisions. These provisions include capital investment, ongoing expenses, and the commitment of time and resources. Once a cost baseline is developed, an organisation can come up with a more objective cost analysis. It can then assess the related components of the vendor's proposal against this cost-benefit analysis before making any decision regarding the outsourcing. Successful Outsourcing For an IT manager to successfully outsource the organisation's IT functions, there are a number of factors that need to be addressed. An organisation that has outsourced its IT functions to an external contractor should not abdicate responsibility for the activity it has outsourced. In other words, there is still a need for the organisation to retain overall control of the IT services being outsourced. In addition, the organisation needs to regularly monitor the external contractor to ensure that they continue to deliver quality service and to perform at the required standard as agreed in the contract arrangement. To be able to do this, the organisation must ensure that it maintains sufficient technically competent "in house" staff to oversee the contract service agreement. Before an organisation outsources its IT functions, it is very important that it prepares a sound, full cost estimate for all existing internal computer systems so that it can determine whether the outsourcing is cost effective. Failure to do so can be critical. 
The costing issue of outsourcing is discussed in more detail in the section headed "The Economics of Outsourcing". For any successful outsourcing, a good solid contract is essential. The contract should also allow for flexibility, as it is difficult, over the life cycle of the contract, to predict every circumstance or cover every eventuality. Successful outsourcing should be based on partnership between the organisation and the external contractor. Outsourcing an organisation's IT functions without proper consultation with employees can cause a lot of stress among IT staff and reduce their morale. The result may be a loss of some key technical and specialist staff from the organisation. More open and timely communication with employees can minimise this impact and uphold staff morale. The organisation can allay fears by outlining career options and opportunities for its staff within and outside the organisation, and also by explaining the benefits of outsourcing to those affected employees. The Economics of Outsourcing. There are many reasons a company may choose to outsource its software development function. These paragraphs address the two main reasons for this action: 1 The perception that outsourcing is cheaper; 2 The expertise for developing the required software product does not exist within the company. In the past, it was difficult to compare the cost of outsourcing a software product against the cost of in-house development, mainly because there was no functional sizing metric agreed upon prior to the start of the contract. As function points grow in popularity and gain wider acceptance as an accurate measure of software size, more firms will be better equipped to compare outsourcing firms with in-house development teams. In all sophisticated industries cost per unit (or average cost) is an important consideration, where average cost is total cost divided by total output. 
The same concept can be applied to software development using function points. Total development cost divided by total function points is an average cost calculation. Once average cost is determined, all prospective developers, in-house and outsourced, can be compared on an equal basis. Just as important is the ability to adequately evaluate the delivered product, considering several factors: size, quality, time to market, and so on. Using functional metrics, total delivered function points can be contractually agreed upon prior to the start of the contract, assuming the company contracting the software development has clearly defined the final product. This is a dramatic change in the way software projects have historically been managed. Any change in the number of total delivered function points once the project begins will impact the average cost calculation. Changes, additions, and even deletions to the software become more expensive per unit as you move through the development life cycle. Since the consumer of the custom-built software wants to minimise unit cost, it is in their best interest to sufficiently define requirements prior to the start of the project. The ability to compare cycle time, or time to production, is also important. Time to production is defined as the total number of function points delivered divided by elapsed calendar time. The least expensive developer may also be the one whose delivery date is furthest out. The buyer of the software must decide if quicker time to production is worth the extra expense. The number of acceptable defects delivered per unit of size is another important evaluation metric; with higher quality come higher development costs. But delivering software with numerous embedded defects will be expensive to maintain and will cost more in the long run. The considerations of outsourcing change dramatically when you view the relationship from the perspective of the outsourcing firm. 
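The function-point arithmetic described above - average cost as total cost divided by delivered function points, and time to production as function points divided by elapsed time - can be sketched in a few lines of Python. The vendor names and figures below are entirely made up for illustration; they come from no source.

```python
# Hypothetical bids for the same 500-function-point project
# (names and numbers are illustrative, not from any real vendor).
bids = {
    "in-house": {"cost": 600_000, "function_points": 500, "months": 10},
    "vendor A": {"cost": 450_000, "function_points": 500, "months": 14},
}

for name, b in bids.items():
    # Average cost: total cost divided by total function points.
    avg_cost = b["cost"] / b["function_points"]
    # Time to production, expressed as a delivery rate:
    # function points delivered per unit of elapsed calendar time.
    fp_per_month = b["function_points"] / b["months"]
    print(f"{name}: ${avg_cost:.0f} per FP, {fp_per_month:.1f} FP/month")
```

On these made-up numbers the cheaper bidder ($900 per function point versus $1,200) is also the slower one (about 36 FP/month versus 50), which is exactly the trade-off the essay says the buyer must weigh.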
It is important to the outsourcing firm that the average unit cost for software development be kept to a minimum. Fixed-price contracts create an environment that pushes average costs lower for subsequent projects. Outsourcing firms have a great incentive to maintain a software library: reuse of components in future projects. If this library is thoroughly tested, ensuring that it is nearly defect free, and documented so it can be easily understood, it can be used with confidence to lower average costs over time. Additionally, outsourcing firms have a great incentive to keep their staffs trained in the latest software languages, tools, and techniques. As more outsourcing projects are undertaken, the responsibility to keep staff knowledgeable and up-to-date transfers from the in-house development team to the outsourcing firm. The outsourcing firm assumes the risk of investing in the technical staff - if their people are not trained on the latest software technologies, they cannot remain competitive with other outsourcing firms who have staffs with state-of-the-art skills. They willingly assume the risk with the expectation that these training initiatives will lower future average costs. Unfortunately, it is still the norm in the software development arena, and in outsourcing cases in particular, for an organisation to be ignorant of the average size and average cost of a software project. All other sophisticated industries calculate and monitor their per-unit average cost. As the software industry continues to mature, not only will it be common practice to know average costs in dollars per function point, it will be required. Conclusion Outsourcing should not be viewed as a solution for resolving problem service areas within the organisation. If an internal service area is not performing effectively, transferring it to an external contractor could only magnify the problem. 
Therefore, it is important that an organisation that undertakes outsourcing be able to clearly identify its long term IT strategic directions and long term information needs. The IT manager is the prime candidate to fulfill this role. Once the organisation has understood and addressed its long term IT strategic directions, it can then go on to decide which IT service areas should be outsourced. Organisations undertaking outsourcing of their IT service areas should do so on the basis of cost and benefit analysis, justified on cost effectiveness, and based on sound business decisions. References Although many different books, references, and web sites were researched, the following institute yielded the most comprehensive supply of information, which ultimately became the basis of this report. The author would strongly recommend that any party investigating outsourcing contact the institute below. The Outsourcing Institute 45 Rockefeller Plaza, Suite 2000, New York, NY 10111 f:\12000 essays\technology & computers (295)\Paperless Office.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The Paperless(?) Office 1. What are the advantages and disadvantages of the paperless office? There are many advantages to having a paperless office. One advantage is that companies are able to greatly reduce the amount of paper that they use. Not only does this help the environment, it helps cut costs within the organization. Companies are also able to improve service by implementing the paperless office. This is because communication is immediate and does not get lost in a pile of papers on someone's desk. A paperless office can also save the company money. This can be seen in the example of Washington Mutual Savings Bank of Seattle. The bank automated more than one hundred different forms and estimates that it is saving upwards of one million dollars per year. One disadvantage to having a paperless office is the issue of security. 
How does a company make sure that the only eyes that see a document are the eyes it is intended for? Also, how does a company know an electronic communication is authentic? Another issue is privacy. How does a company make sure that when an electronic communication is sent, only the person it is intended for will read it? How does a company make sure private information does not make the evening news? 2. Are certain types of information more readily amenable to digital processing in a paperless office than others? If so, why; if not, why not? It would seem that some types of information are better in paperless form, while some are not. Implementing an e-mail system can do wonders for companies. The e-mail sessions allow managers to get more information across to the employees and vice versa. This is a way to make sure everyone will have access to the same information. A paperless office is a good way to send and receive reports. Another area that is conducive to a paperless office is companies that put large volumes of books and papers on CD-ROM. A single CD-ROM can hold a whole room full of books. This cuts down on the physical space a company must devote to paper storage. 3. How might book publishing change as the technology of the paperless office continues to develop? Will books become obsolete? Why or why not? The book publishing industry will have to grow and change in relation to the changing technology. As the paperless office gains more and more popularity, one will begin to see more and more documents on CD-ROM and also on the Internet. CD-ROMs are cost-effective, paper-reducing, and easy to manufacture. In the near future what will probably happen is that publications will be produced both on paper and in some type of electronic media. I see a sort of phasing, similar to that of the cassette tape giving way to the compact disc. 
For the meantime, most everyone has gone the way of compact discs, but there are still those who prefer cassette tapes for whatever reason. For this reason I don't think we will see the disappearance of printed books, but we will begin to see more and more on some type of electronic media. f:\12000 essays\technology & computers (295)\Past Present and Future of computers.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Imagine being able to do almost anything right from your own living room. You could order a pizza, watch cartoons, or play video games with people from around the entire world. All are possible today with your computer. The beginnings of the computer started off in a rather unique way. It was first used to produce intricate designs with silk, a task far too long and tedious for a human to do constantly. It's really unbelievable how computers changed from that to what they are now. Today, computers are completely astounding. The possibilities are endless. Who knows where they will take us in the years ahead. The computer is the most influential piece of equipment that has ever been invented. The beginnings of the computer are actually kind of strange. It started in the 1800s when a man named Charles Babbage wanted to make a calculating machine. He created a machine that would calculate logarithms on a system of constant difference and record the results on a metal plate. The machine was aptly named the Difference Engine. Within ten years, the Analytical Engine was produced. This machine could perform several tasks. These tasks would be given to the machine, which could figure out the values of almost any algebraic equation. Soon, a silk weaver wanted to make very intricate designs. The designs were stored on punch-cards which could be fed into the loom in order to produce the designs requested. This is an odd beginning for the most powerful invention in the world. 
In the 1930s, a man named Konrad Zuse started to make his own type of computer. Out of his work, he made several good advances in the world of computing. First, he developed the binary coding system. This was a base-two system which allowed computers to read information as either a 1 or a 0. This is the same as on or off. The on or off functions could be created through switches. These switches were implemented with vacuum tubes. The functions could then be relayed as fast as electrons jumping between plates. This was all during the time of the Second World War, and further advancements were made in the area of cryptology. Computer advancements were needed in order for the Allied Coding Center in London to decode encrypted Nazi messages. Speed was of the essence, so scientists developed the first fully valve-driven computer. Before this, computers only had a number of valves; none were fully driven by them because of the complexity and difficulty of producing one. Despite the odds, several Cambridge professors accomplished the mammoth task. Once it was built, the computer could decode the encrypted messages in enough time to be of use, and was an important factor in the end of World War II. The war also provided advancements in the United States as well. The trajectory of artillery shells was a complex process that took a lot of time to compute in the field. A new, more powerful computer was badly needed. Working with the Moore School of Electrical Engineering, the Ballistics Research Laboratory created the Electronic Numerical Integrator and Computer. The ENIAC could compute things a thousand times faster than any machine built before it. Even though it was not completed until 1946 and was not any help during the war, it provided another launching pad for scientists and inventors of the near future. The only problem with the ENIAC was that it was a long and tedious process to program it. 
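The base-two idea described above - every value reduced to a row of on/off switches - can be illustrated in a few lines of Python. This is a minimal sketch of binary representation for illustration, not historical code.

```python
def to_binary(n: int) -> str:
    """Represent a non-negative number as its base-two switch settings."""
    if n == 0:
        return "0"
    bits = []
    while n:
        bits.append(str(n % 2))  # each remainder is one switch: on (1) or off (0)
        n //= 2
    return "".join(reversed(bits))

# 13 = 8 + 4 + 0 + 1, so its switches read 1101.
print(to_binary(13))  # → 1101
```

Each digit stands for a power of two, so any number a machine needs can be held in nothing but on/off elements - first relays and vacuum tubes, later transistors.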
What was needed was a computation device that could store simple "programs" in its memory to call on later. The Electronic Discrete Variable Computer was the next in line. A young man named John von Neumann had the original plan for memory. His only problem was where and how the instructions could be stored for later use. Several ideas were pursued, but the one found most effective at the time was magnetic tape. Sets of instructions could be stored on the tapes and used to input the information instead of hand-feeding the machine every time. If you have ever heard of a "tape backup" for a computer, this is exactly what it is. All the information on your computer can be stored on magnetic tape and recovered if your system ever crashes. It's strange that a method developed so long ago is still in use today, even though the computer today can do a lot more than simply "compute". The computer works in a relatively simple way. It consists of five parts: input, output, memory, CPU, and arithmetic logic unit. Input is the device used by the operator of the computer to make it do what is requested. The output displays the results of the tasks created from the input. The data goes from the input to the memory, then to the arithmetic logic unit for processing, then to the output. The data can then be stored in memory if the user desires. Before the advent of the monitor, the user would have to hand-feed cards into the input and wouldn't see the results until they were displayed by the printer. Now that we have monitors, we can view the instant results of the tasks. The main component that allows the computer to do what is desired is the transistor. The transistor can either amplify or block electrical currents to produce either a 1 or a 0. Previously done by valves and vacuum tubes, the transistor allows for much faster processing of information. The microprocessor consists of a layered microchip which is on a base of silicon. 
It is a computer in itself and is the most integral part of the CPU in modern computers. It is a single chip which enables all that happens on a computer. Integrated circuits - microchips layered with their own circuitry - also provide a much more manageable memory source. The only reason magnetic tape backups are used today is because of the space which is needed in order to back up an entire computer. Memory for today's computers consists of RAM and ROM. ROM is unchangeable and stores the computer's most vital component, its operating instructions. Without this, the computer would be completely inoperable. Programs today use the instructions in the ROM to complete the tasks the program is attempting. This is why you cannot use IBM programs on a Macintosh; the ROM and operating systems are different, therefore the programming calls are different. Some powerful computers today can complete both sets of tasks because they have both sets of instructions stored in the ROM. The reason ROM is unchangeable is that people who don't know what they are doing could mess things up on their computer forever. RAM is the temporary memory that is in a computer. This is the memory that is used by programs to complete their tasks. RAM is only temporary because it requires a constant electrical charge. Once the computer is shut off, the RAM loses everything that was in it. That is why you lose work that you have done if the power goes off and you didn't save it first. If something needs to be saved, it is either saved to the hard disk within the computer or to a floppy disk. With today's networking capabilities, things can be saved on completely separate machines called "servers". Though the process of saving is the same, a server can be located five feet away or on the opposite side of the world. With today's technology, anything is possible with the use of a computer. 
You could visit a website and find that special someone, or create a virus that could crash thousands of machines at a single moment in time. If you have the money, the possibilities are endless. In today's day and age, information is sacred. One of the biggest problems found with information is deciding what is free and what isn't. There will always be people who want more information than will be allotted to them; today, these people are known as hackers. Hackers use their individual knowledge to gain access to information that is not meant for them to know. It is almost a shame that hackers have such a bad reputation. Most are teenagers who are looking to gain more information. Of course, some are dedicated to destruction and random violence, but there always will be those types of people in the world. Of course, there is personal information that is transmitted over the Internet that no one but the intended party and yourself should have access to (i.e. your credit card numbers and expiration dates), but who decides what is and what isn't personal information? This is a problem that has greatly prevented the growth of the Internet into major companies. In the future, it can only get worse. At the rate we are going, everything will be computerized and stored electronically. This means that with the know-how, anyone could access your information. If you have ever seen the movie "The Net", you know exactly what I am talking about. If all information is stored electronically, anyone with the desire can view, change, remove, or add your personal attributes. With enough effort, one could take away someone's entire identity. This may seem like a futuristic sci-fi novel, but it could be in our not so distant future. The future of technology can only be guessed at. I believe that the connection between computers and humans will become much closer. People will feel the need to become "one" with their machines and possibly even be physically linked with them. 
Information will be stored, transmitted, and viewed completely electronically. Perhaps an implant directly into the brain will be the link between humans and computers. This implant could feed you information directly off the Internet and several other sources. I personally believe that this is extremely scary. Once that link is made, there will be the desire to get even closer to the computer. A new, even more intimate link will be made. The cycle will continue until there is no line between humans and electronics. We will all be robots just reacting to instructions and following protocol. This is the most horrifying thing I can imagine. Our identities will be removed and we will all become one. I don't know; maybe I need to stop for a minute before I completely terrify myself. f:\12000 essays\technology & computers (295)\Pentium Pro Microarchitecture.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ A Tour of the Pentium(r) Pro Processor Microarchitecture Introduction One of the Pentium(r) Pro processor's primary goals was to significantly exceed the performance of the 100MHz Pentium(r) processor while being manufactured on the same semiconductor process. Using the same process as a volume production processor practically assured that the Pentium Pro processor would be manufacturable, but it meant that Intel had to focus on an improved microarchitecture for ALL of the performance gains. This guided tour describes how multiple architectural techniques - some proven in mainframe computers, some proposed in academia and some we innovated ourselves - were carefully interwoven, modified, enhanced, tuned and implemented to produce the Pentium Pro microprocessor. This unique combination of architectural features, which Intel describes as Dynamic Execution, enabled the first Pentium Pro processor silicon to exceed the original performance goal. 
Building from an already high platform The Pentium processor set an impressive performance standard with its pipelined, superscalar microarchitecture. The Pentium processor's pipelined implementation uses five stages to extract high throughput from the silicon - the Pentium Pro processor moves to a decoupled, 12-stage, superpipelined implementation, trading less work per pipestage for more stages. The Pentium Pro processor reduced its pipestage time by 33 percent, compared with a Pentium processor, which means the Pentium Pro processor can have a 33% higher clock speed than a Pentium processor and still be equally easy to produce from a semiconductor manufacturing process (i.e., transistor speed) perspective. The Pentium processor's superscalar microarchitecture, with its ability to execute two instructions per clock, would be difficult to exceed without a new approach. The new approach used by the Pentium Pro processor removes the constraint of linear instruction sequencing between the traditional "fetch" and "execute" phases, and opens up a wide instruction window using an instruction pool. This approach allows the "execute" phase of the Pentium Pro processor to have much more visibility into the program's instruction stream so that better scheduling may take place. It requires the instruction "fetch/decode" phase of the Pentium Pro processor to be much more intelligent in terms of predicting program flow. Optimized scheduling requires the fundamental "execute" phase to be replaced by decoupled "dispatch/execute" and "retire" phases. This allows instructions to be started in any order but always be completed in the original program order. The Pentium Pro processor is implemented as three independent engines coupled with an instruction pool as shown in Figure 1 below. What is the fundamental problem to solve? 
Before starting our tour on how the Pentium Pro processor achieves its high performance it is important to note why this three-independent-engine approach was taken. A fundamental fact of today's microprocessor implementations must be appreciated: most CPU cores are not fully utilized. Consider the code fragment in Figure 2 below: The first instruction in this example is a load of r1 that, at run time, causes a cache miss. A traditional CPU core must wait for its bus interface unit to read this data from main memory and return it before moving on to instruction 2. This CPU stalls while waiting for this data and is thus being under-utilized. While CPU speeds have increased 10-fold over the past 10 years, the speed of main memory devices has only increased by 60 percent. This increasing memory latency, relative to the CPU core speed, is a fundamental problem that the Pentium Pro processor set out to solve. One approach would be to place the burden of this problem onto the chipset but a high-performance CPU that needs very high speed, specialized, support components is not a good solution for a volume production system. A brute-force approach to this problem is, of course, increasing the size of the L2 cache to reduce the miss ratio. While effective, this is another expensive solution, especially considering the speed requirements of today's L2 cache SRAM components. Instead, the Pentium Pro processor is designed from an overall system implementation perspective which will allow higher performance systems to be designed with cheaper memory subsystem designs. Pentium Pro processor takes an innovative approach To avoid this memory latency problem the Pentium Pro processor looks ahead into its instruction pool at subsequent instructions and will do useful work rather than be stalled. In the example in Figure 2, instruction 2 is not executable since it depends upon the result of instruction 1; however both instructions 3 and 4 are executable. 
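Figure 2 itself is not reproduced here, but the situation it describes can be sketched with a hypothetical four-instruction fragment (the register names and the readiness check are invented for illustration):

```python
# Hypothetical stand-in for the Figure 2 fragment: instruction 2
# needs the result of instruction 1, while instructions 3 and 4
# touch unrelated registers.
uops = [
    {"id": 1, "reads": ["r0"], "writes": "r1"},        # load r1 <- mem[r0], cache miss
    {"id": 2, "reads": ["r1", "r2"], "writes": "r2"},  # depends on the load
    {"id": 3, "reads": ["r5"], "writes": "r5"},        # independent
    {"id": 4, "reads": ["r6", "r3"], "writes": "r6"},  # independent
]

def executable_now(pending, pending_writes):
    """Ids of waiting uops whose inputs are not produced by a
    still-outstanding instruction."""
    return [u["id"] for u in pending
            if not any(r in pending_writes for r in u["reads"])]

# While the load is outstanding, its result register r1 is pending,
# so only instructions 3 and 4 can proceed.
ready = executable_now(uops[1:], {"r1"})
print(ready)  # -> [3, 4]
```

A stalled in-order core would sit idle at instruction 2; a core that can see this readiness information gets useful work done during the miss.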
The Pentium Pro processor speculatively executes instructions 3 and 4. We cannot commit the results of this speculative execution to permanent machine state (i.e., the programmer-visible registers) since we must maintain the original program order, so the results are instead stored back in the instruction pool awaiting in-order retirement. The core executes instructions depending upon their readiness to execute and not on their original program order (it is a true dataflow engine). This approach has the side effect that instructions are typically executed out-of-order. The cache miss on instruction 1 will take many internal clocks, so the Pentium Pro processor core continues to look ahead for other instructions that could be speculatively executed and is typically looking 20 to 30 instructions in front of the program counter. Within this 20- to 30-instruction window there will be, on average, five branches that the fetch/decode unit must correctly predict if the dispatch/execute unit is to do useful work. The sparse register set of an Intel Architecture (IA) processor will create many false dependencies on registers so the dispatch/execute unit will rename the IA registers to enable additional forward progress. The retire unit owns the physical IA register set and results are only committed to permanent machine state when it removes completed instructions from the pool in original program order. Dynamic Execution technology can be summarized as optimally adjusting instruction execution by predicting program flow, analysing the program's dataflow graph to choose the best order to execute the instructions, then having the ability to speculatively execute instructions in the preferred order. The Pentium Pro processor dynamically adjusts its work, as defined by the incoming instruction stream, to minimize overall execution time. 
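Register renaming, mentioned above as the cure for false dependencies, can be illustrated with a toy mapping table (a sketch of the general technique, not the processor's actual RAT logic; register names are for illustration):

```python
# Toy register renaming: two writes to the same architectural
# register form a false (write-after-write) dependency; giving each
# a fresh physical register removes it, leaving only true
# (read-after-write) dependencies.
next_phys = 0
rat = {}  # architectural register -> current physical register

def rename(reads, writes):
    """Map source registers through the table, then allocate a new
    physical register for the destination."""
    global next_phys
    srcs = [rat.get(r, r) for r in reads]   # sources use the old mapping
    dest = f"p{next_phys}"
    next_phys += 1
    rat[writes] = dest                      # destination gets a fresh register
    return srcs, dest

_, dest_a = rename(["ebx"], "eax")       # first write to eax -> p0
srcs_b, dest_b = rename(["eax"], "ecx")  # reads eax via its new name, p0
print(dest_a, dest_b, srcs_b)  # -> p0 p1 ['p0']
```

The retire unit then maps results back from physical registers to the architectural ones, in program order, exactly as the text describes.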
Overview of the stops on the tour We have previewed how the Pentium Pro processor takes an innovative approach to overcome a key system constraint. Now let's take a closer look inside the Pentium Pro processor to understand how it implements Dynamic Execution. Figure 3 below extends the basic block diagram to include the cache and memory interfaces - these will also be stops on our tour. We shall travel down the Pentium Pro processor pipeline to understand the role of each unit:
• The FETCH/DECODE unit: An in-order unit that takes as input the user program instruction stream from the instruction cache, and decodes it into a series of micro-operations (uops) that represent the dataflow of that instruction stream. The program pre-fetch is itself speculative.
• The DISPATCH/EXECUTE unit: An out-of-order unit that accepts the dataflow stream, schedules execution of the uops subject to data dependencies and resource availability and temporarily stores the results of these speculative executions.
• The RETIRE unit: An in-order unit that knows how and when to commit ("retire") the temporary, speculative results to permanent architectural state.
• The BUS INTERFACE unit: A partially ordered unit responsible for connecting the three internal units to the real world. The bus interface unit communicates directly with the L2 cache supporting up to four concurrent cache accesses. The bus interface unit also controls a transaction bus, with MESI snooping protocol, to system memory.
Tour stop #1: The FETCH/DECODE unit. Figure 4 shows a more detailed view of the fetch/decode unit: Let's start the tour at the Instruction Cache (ICache), a nearby place for instructions to reside so that they can be looked up quickly when the CPU needs them. The Next_IP unit provides the ICache index, based on inputs from the Branch Target Buffer (BTB), trap/interrupt status, and branch-misprediction indications from the integer execution section. 
The 512-entry BTB uses an extension of Yeh's algorithm to provide greater than 90 percent prediction accuracy. For now, let's assume that nothing exceptional is happening, and that the BTB is correct in its predictions. (The Pentium Pro processor integrates features that allow for the rapid recovery from a mis-prediction, but more of that later.) The ICache fetches the cache line corresponding to the index from the Next_IP, and the next line, and presents 16 aligned bytes to the decoder. Two lines are read because the IA instruction stream is byte-aligned, and code often branches to the middle or end of a cache line. This part of the pipeline takes three clocks, including the time to rotate the prefetched bytes so that they are justified for the instruction decoders (ID). The beginning and end of the IA instructions are marked. Three parallel decoders accept this stream of marked bytes, and proceed to find and decode the IA instructions contained therein. The decoder converts the IA instructions into triadic uops (two logical sources, one logical destination per uop). Most IA instructions are converted directly into single uops, some instructions are decoded into one-to-four uops and the complex instructions require microcode (the box labeled MIS in Figure 4; this microcode is just a set of preprogrammed sequences of normal uops). Some instructions, called prefix bytes, modify the following instruction giving the decoder a lot of work to do. The uops are enqueued, and sent to the Register Alias Table (RAT) unit, where the logical IA-based register references are converted into Pentium Pro processor physical register references, and to the Allocator stage, which adds status information to the uops and enters them into the instruction pool. The instruction pool is implemented as an array of Content Addressable Memory called the ReOrder Buffer (ROB). We have now reached the end of the in-order pipe. 
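A flavor of the prediction machinery can be given with a stripped-down two-level predictor in the spirit of Yeh's algorithm (a deliberate simplification, not the BTB's real structure): a shift register of recent branch outcomes indexes a table of 2-bit saturating counters.

```python
# Minimal two-level adaptive predictor: the last HIST_BITS branch
# outcomes select one of 2**HIST_BITS two-bit saturating counters.
HIST_BITS = 4
history = 0
counters = [2] * (1 << HIST_BITS)   # start each counter at "weakly taken"

def predict():
    return counters[history] >= 2   # 2 or 3 means "predict taken"

def update(taken):
    global history
    c = counters[history]
    counters[history] = min(3, c + 1) if taken else max(0, c - 1)
    history = ((history << 1) | int(taken)) & ((1 << HIST_BITS) - 1)

# A loop branch taken three times and then falling through, over and
# over: once each history pattern has trained its counter, prediction
# is perfect - including the loop exit that defeats simpler schemes.
correct = 0
outcomes = [True, True, True, False] * 8
for taken in outcomes:
    correct += predict() == taken
    update(taken)
print(correct, "of", len(outcomes), "correct")  # -> 31 of 32 correct
```

This is why the text can speak of greater than 90 percent accuracy: history-indexed counters learn repeating patterns, not just a branch's overall bias.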
Tour stop #2: The DISPATCH/EXECUTE unit The dispatch unit selects uops from the instruction pool depending upon their status. If the status indicates that a uop has all of its operands then the dispatch unit checks to see if the execution resource needed by that uop is also available. If both are true, it removes that uop and sends it to the resource where it is executed. The results of the uop are later returned to the pool. There are five ports on the Reservation Station and the multiple resources are accessed as shown in Figure 5 below: The Pentium Pro processor can schedule at a peak rate of 5 uops per clock, one to each resource port, but a sustained rate of 3 uops per clock is typical. The activity of this scheduling process is the quintessential out-of-order process; uops are dispatched to the execution resources strictly according to dataflow constraints and resource availability, without regard to the original ordering of the program. Note that the actual algorithm employed by this execution-scheduling process is vitally important to performance. If only one uop per resource becomes data-ready per clock cycle, then there is no choice. But if several are available, which should it choose? It could choose randomly, or first-come-first-served. Ideally it would choose whichever uop would shorten the overall dataflow graph of the program being run. Since there is no way to really know that at run-time, it approximates by using a pseudo FIFO scheduling algorithm favoring back-to-back uops. Note that many of the uops are branches, because many IA instructions are branches. The Branch Target Buffer will correctly predict most of these branches but it can't correctly predict them all. Consider a BTB that's correctly predicting the backward branch at the bottom of a loop: eventually that loop is going to terminate, and when it does, that branch will be mispredicted. 
Branch uops are tagged (in the in-order pipeline) with their fallthrough address and the destination that was predicted for them. When the branch executes, what the branch actually did is compared against what the prediction hardware said it would do. If those coincide, then the branch eventually retires, and most of the speculatively executed work behind it in the instruction pool is good. But if they do not coincide (a branch was predicted as taken but fell through, or was predicted as not taken and it actually did take the branch) then the Jump Execution Unit (JEU) changes the status of all of the uops behind the branch to remove them from the instruction pool. In that case the proper branch destination is provided to the BTB which restarts the whole pipeline from the new target address. Tour stop #3: The RETIRE unit Figure 6 shows a more detailed view of the retire unit: The retire unit is also checking the status of uops in the instruction pool - it is looking for uops that have executed and can be removed from the pool. Once removed, the uops' original architectural target is written as per the original IA instruction. The retirement unit must not only notice which uops are complete, it must also re-impose the original program order on them. It must also do this in the face of interrupts, traps, faults, breakpoints and mispredictions. There are two clock cycles devoted to the retirement process. The retirement unit must first read the instruction pool to find the potential candidates for retirement and determine which of these candidates are next in the original program order. Then it writes the results of this cycle's retirements to both the Instruction Pool and the RRF (Retirement Register File). The retirement unit is capable of retiring 3 uops per clock. Tour stop #4: BUS INTERFACE unit Figure 7 shows a more detailed view of the bus interface unit: There are two types of memory access: loads and stores. 
Loads only need to specify the memory address to be accessed, the width of the data being retrieved, and the destination register. Loads are encoded into a single uop. Stores need to provide a memory address, a data width, and the data to be written. Stores therefore require two uops, one to generate the address, one to generate the data. These uops are scheduled independently to maximize their concurrency, but must re-combine in the store buffer for the store to complete. Stores are never performed speculatively, there being no transparent way to undo them. Stores are also never re-ordered among themselves. The Store Buffer dispatches a store only when the store has both its address and its data, and there are no older stores awaiting dispatch. What impact will a speculative core have on the real world? Early in the Pentium Pro processor project, we studied the importance of memory access reordering. The basic conclusions were as follows:
• Stores must be constrained from passing other stores, for only a small impact on performance.
• Stores can be constrained from passing loads, for an inconsequential performance loss.
• Constraining loads from passing other loads or from passing stores creates a significant impact on performance.
So what we need is a memory subsystem architecture that allows loads to pass stores. And we need to make it possible for loads to pass loads. The Memory Order Buffer (MOB) accomplishes this task by acting like a reservation station and Re-Order Buffer, in that it holds suspended loads and stores, redispatching them when the blocking condition (dependency or resource) disappears. Tour Summary It is the unique combination of improved branch prediction (to offer the core many instructions), data flow analysis (choosing the best order), and speculative execution (executing instructions in the preferred order) that enables the Pentium Pro processor to deliver its performance boost over the Pentium processor. 
This unique combination is called Dynamic Execution, and its impact is similar to that of "Superscalar" on previous generation Intel Architecture processors. While all your PC applications run on the Pentium Pro processor, today's powerful 32-bit applications take best advantage of Pentium Pro processor performance. And while our architects were honing the Pentium Pro processor microarchitecture, our silicon technologists were working on an advanced manufacturing process - the 0.35 micron process. The result is that the initial Pentium Pro Processor CPU core speeds range up to 200MHz. f:\12000 essays\technology & computers (295)\pg.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Joystick Port Power Glove Here's something useful to do with those old Nintendo PowerGloves collecting dust in the closet! I will show you how you can use parts of the Mattel/Nintendo Power Glove to make your own input devices. Step 1. Remove the flexible resistor strips from the PowerGlove's fingers. To do this, you must peel the black "glove" from the grey plastic part, as shown: Note: If you choose, you may remove the rest of the electronics on the glove and use it as it is. I chose to remove the strips and sew them onto a glove that fit my hand better. Step 2. Cut along the clear plastic tubes surrounding the brown flexible sensor strips, to free the strips from the grey plastic. De-solder or cut the wires connecting the sensors to the glove. Step 3. Sew the sensors onto a glove that fits your hand. Notice that the sensors bend one direction better than the other. Keep this in mind when placing them on your glove (or whatever else you build). I used this soccer glove because it had fabric pieces sewn over the fingers. I simply cut the stitches and put the sensors under the fabric. I later found it necessary to sew one end of the sensor to the glove to hold it in place. The maximum resistance value of the sensors I used was 150K ohms. 
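As a quick sanity check on the electrical side: the write-up goes on to add a 19K ohm resistor in parallel with each sensor, and the resistance the joystick port then sees follows the standard parallel-resistance formula (a worked example, not part of the original build steps):

```python
# The joystick port reads position by timing against the attached
# resistance, so the value it sees is the sensor and the fixed
# resistor combined in parallel.
def parallel(r1, r2):
    """Equivalent resistance, in ohms, of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

# With the sensor at its 150K ohm maximum and a 19K resistor across it:
print(round(parallel(150e3, 19e3)))  # -> 16864
```

So the 19K resistor caps the combined resistance below about 17K ohms, pulling the sensor's full flex range into a band the port can resolve.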
I connected my glove to my PC through the joystick port, using positions 0 and 1. I later added a 19K ohm resistor in parallel with each sensor to increase the sensitivity for the PC joystick port. I have included the pin diagram of a typical PC joystick port. Table was referenced from The Pocket Ref, compiled by Thomas J. Glover, published by Sequoia Publishing, Inc. You connect one pole of the resistor strip to +5 volts and the other pole to one of the coordinate positions on the joystick port.
Pin  Description
 1   +5 volts (from computer)
 2   Button 1 input
 3   Position 0, X-Coordinate
 4   Ground
 5   Ground
 6   Position 1, Y-Coordinate
 7   Button 2 input
 8   +5 volts
 9   +5 volts
10   Button 3 input
11   Position 2, X-Coordinate
12   Ground
13   Position 3, Y-Coordinate
14   Button 4 input
15   +5 volts
WARNING! Do not attempt to plug anything you build into your computer unless you are ABSOLUTELY certain you know what you are doing. You can cause permanent damage to your hardware! The Visual Basic program and joystick driver I used to test my glove are available for ftp. This program runs under Windows. To use it you must install the joystick driver included in the .zip. I have included the source code and make file for the program so you can see how it works, if you have VB. The program is simple to use. Run it, make the gesture you want to recall, press the corresponding button, and watch the recall window for results. The text on the button will change from red to green when a gesture is stored. Adjust the fuzz factor to increase or decrease sensitivity (between 2500 and 8000 is usually good). Position values are also displayed for the finger and thumb. This page provided by: Space Research Group f:\12000 essays\technology & computers (295)\Piracy.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ October 28, 1996 Ian Sum Recently, The Toronto Star published an article entitled "RCMP seizes BBS, piracy charges pending." 
The RCMP have seized all computer components belonging to the "90 North" bulletin board system in Montreal, Quebec. The board is accused of giving people the opportunity to download commercial and beta versions of software. I feel that the RCMP should not charge people that are linked to computer piracy, because the pirated software offers valuable opportunity to programmers and users. Also, revenue lost to the large software companies is such a small amount that the effect won't be greatly felt by them, and so it is not worth the policing effort required to track down the pirates. When pirates distribute the illegal software, one could say that they are helping, rather than hurting, the software companies. By distributing the software world wide, it creates great advertisement for the software companies and their products. Although the software company is losing profits from that particular version, it could generate future sales with other versions. Also, when the pirates distribute the software this could be a great source of test data for the software companies. This is an effective way to catch any undiscovered bugs in the software program. From debugging to hacking, hackers can benefit the most. They can study and learn from the advancements within the programming. So what does all this activity tell us? This tells us that people are willing to go to great lengths to get software at a lower cost, or possibly in exchange for other software, and that they are succeeding in their efforts. Although more than 50% of their software income is from other companies which do not pirate, this poses a problem for the software industries. By fining a single bulletin board out of the thousands in North America, little would be accomplished. Not to mention the fact that it is extremely difficult to prove and convict people under the Copyright Act. 
In today's society, revenue from software is such a small income source for corporations such as WordPerfect Corp. These companies make their money mainly from individuals purchasing extra manuals, reference material, supplementary hardware, and calling product support. Software companies are conscious of the pirate world and the changes it has made. Some companies actually want you to take the software by using the SHAREWARE concept. In SHAREWARE one gets a chance to use demo programs and then pay for the full purchase if one feels it is worthwhile. It is a bit like test driving a car before one buys. In most cases, users are happy and end up purchasing complete software. Most software companies are still in business, and still bringing out more technological advancements that entice users to continually buy newer versions. The companies, in this sense, have outsmarted and beaten the pirates. Violation of the Copyright Act seems to benefit software companies more than it hurts them. Their software gets more exposure, which leads to more software revenue in the end than revenue that is lost through piracy. The opportunity cost is worth it in the end. Cracking down on software piracy is a waste of society's energy. There is more benefit for everyone the way things are at present. Users get to view and evaluate software before they pay. Hackers get an opportunity to view other works and learn from the advancements in, or find the errors in, the beta versions. Software companies get more exposure, which in the long run will lead to more revenues for them. f:\12000 essays\technology & computers (295)\Policies and Procedures Manual Forms Analysis and Design.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Policies and Procedure Guidelines Page 1 of 14 Section 1.1: Forms Analysis and Design Effective date: March 6, 1997 Issued by: Approved by: 1.1 FORMS ANALYSIS AND DESIGN 1.1.1 WHAT IS A FORM? 
A form is basically a fixed arrangement of captioned spaces designed for entering and obtaining prescribed information. A form is considered effective if it is:
· easy to complete
· easy to use
· easy to store
· easy to retrieve information from quickly
· easy to dispose of
1.1.2 HOW IS IT IMPORTANT? In a business, forms analysis and design are greatly needed to allow the company to better organize the way it operates, smoothly and efficiently. Although the presence of forms and design in a company ensures that the company will run better, be able to make better decisions and be able to coordinate activities more easily, these forms and design programs must be covered in the company's budget, in terms of costs. The company will have to make sure that its forms and designs are a uniform standard throughout the company and not different in separate sections of the company's total make-up. If, by chance, the presence of a universal form in a certain section of the company is a disadvantage rather than an advantage, the forms and policies of other companies may be looked at in order to correct the problem. When creating a form, companies may use the same standard techniques before making changes to make the form right for the company. Some basic techniques are making sure that the form is easy to fill in, takes minimal time to fill in, has a functional layout and has an attractive visual appearance. After using the basic standards of form design, the forms analysts spend countless hours making the design a unique standard for their company, while considering every section of the company, so that the form will be useful to every member of the company. Policies and Procedure Guidelines Page 2 of 14 Section 1.2: Tools and Aids For Forms and Design Effective date: March 6, 1997 Issued by: Approved by: 1.2 TOOLS AND AIDS FOR FORMS DESIGNING Many companies use the same basic tools to design their forms. 
In the past when forms were designed, many "traditional tools" were used. Some of those tools include the following:
· pencils, erasers
· rulers, triangles
· tracing paper
· lettering and symbol templates
· cutting tools
· masking tape and cellophane tape
· correction fluid
· rubber cement
Now, because of new technology and easier ways to design forms, most of these tools are obsolete. New computer hardware and software have provided many tools and accessories which have allowed companies to train employees to design forms using these advanced tools. Software packages such as Corel Draw and Microsoft Office, which includes Word, Excel, Access and PowerPoint, along with WordPerfect, PowerBuilder, Visual Basic and many other software packages, have made these tasks easier to complete. Their amazingly accurate and precise design tools provide "picture-perfect" quality. 1.2.1 Computer Hardware and Software · Pentium Computers Today most designers use computers, especially Pentium computers, because of their speed and performance. Policies and Procedure Guidelines Page 3 of 14 Section 1.2: Tools and Aids For Forms and Design Effective date: March 6, 1997 Issued by: Approved by: · Corel Draw There are several different software packages that can be used to design forms. Many companies recommend Corel Draw. It is an excellent choice for designing the form as you would want it on paper. There are excellent designing tools included in the Corel package which allow you to draw lines of any size, color or shape. It also allows you to insert grids, graphics, graphs or images with different border styles and sizes. · Microsoft Word After designing the physical appearance of the form with style and borders, Microsoft Word will be used to fill in the form's information because of the various fonts that are available. 
Also, Microsoft Word's ability to change font size and to bold, underline or italicize wording will be very useful in the creation of the text that will appear in the form. · Microsoft Excel This section of Microsoft Office can be used by the designers to design grids and graphs that might be needed to represent data in the form. Grids and tables may be inserted into the form to hold data that the applicant may need to fill in. Different types of graphs such as pie charts, line graphs, column graphs and combination graphs may be needed to represent a question in the form. For example, the applicant may need to fill in what percentage he/she belongs to as compared to the rest of the field represented by the graph. · Microsoft Access This section of Microsoft Office can be used to design databases. The designers may want to include previously designed tables or create new tables to insert into forms. They may also want to include only portions of tables, for which they can create queries so that the tables they insert include only the information that they specified. Policies and Procedure Guidelines Page 4 of 14 Section 1.2: Tools and Aids For Forms and Design Effective date: March 6, 1997 Issued by: Approved by: · Printers An Epson III Laser Jet Color Printer can be used to print the forms. The laser quality will provide the crisp and clear texture of lines and text, along with bright colors to make the form more attractive and visually appealing. Although any laser printer will provide excellent quality, color laser printers make the forms more attractive because the different colors distinguish between the different sections of the form. · Saving Forms All the forms that are designed by the company should be backed up on the hard drives of the computers. The forms will be saved whether they were used or not, in case of changes in the form's design or in case the company wants to improve on a previously designed form. 
The forms will also be backed up on floppy disks, in case of viruses, computer malfunctions, or hard drive upgrading and formatting.
1.3 DESIGNING PROCEDURES
The two major objectives of this process are:
1) collecting information, which is the form's reason for existence
2) providing a standard format for the form.
1.3.1 Facilitative Area
Forms are a very important aspect of a company because they provide the information about each employee that the employers wish to know. Since most companies use a standardized format, each form must contain the company's title and identify the type of form the applicant is filling out. It is also useful to include the name of the department, the date, and any codes and instructions that may be necessary to complete the form.
· Identification
The title of the form will be placed at the top center of the form, and where a form contains more than one invoice, subtitles should be included to distinguish it from the rest of the forms. If the forms will be filed, it is helpful to place the title in the "visible area" of the form, the area that remains visible when the form is in a filing cabinet or other filing system.
· Form Numbers
The forms will also include form numbers, placed in either of the lower corners on each page of the form. This prevents the form numbers from being covered by staples and keeps them from interfering with the working area of the form. It also serves as an aid in stocking the forms in small quantities.
· Page Numbers
It is also very important to ensure that all pages of the form contain page numbers, for several reasons.
Page numbers identify which page of the form a sheet belongs to and make it easier to sort out forms, especially those with more than one page. The page numbers should be placed in the upper right-hand corner of the page so that the number is easy to see when the pages are stapled in the upper left corner. (EX: Page 1 of **)
· Edition Date
The company should ensure that all forms carry edition dates showing when the form was made. The form should also show how long it will be valid before it needs to be updated again. The edition dates will be included with the form numbers.
· Supersession Notice
This is simply a method of notifying users and supply-room workers that a new form has been created to replace the older version, or that a newer version of the previous form has been revised. This notice is usually printed in the bottom margin of the form. It should let the user know whether the form has been replaced and what the number of the new form is. If more than one form is used to replace a single form, a separate notice would be more appropriate to inform affected personnel of the change.
· Expiration Dates and Approval of Forms
If a form is to be used for only a limited time, it should contain expiration and limit dates. These let users know when and how long the form will be valid and when they should get another one. Because many forms must be approved by the company before they are distributed to users, they must allow room for the company to state its approval number, signature or symbol, along with the date the form was approved.
· Emblems and Symbols
After the forms are approved by the company, the designers must insert the company's emblem or logo on the form.
This validates the form as property of that company and acts much like a patent, so that it won't be used by any other companies.
· Comments and Suggestions
To leave room for improvement of the forms, there should be enough space for any comments or suggestions that the authorizing department wishes to leave when approving the form. The form must be approved by the department before the company's logo or seal can be placed on it, and it must carry the company's logo before it is valid.
1.4 INSTRUCTIONS
1.4.1 General Instructions
To ensure that the forms are easy to fill out, each form will contain instructions for completing it and for what to do with it afterward. The instructions should be brief. The instructions located under the title of the form will be basic, general instructions that tell the applicant what to do with the form, why they are filling it out and who they should give it to when they are finished. These should be read by the user before completing the form.
1.4.2 Lengthy Instructions
Where a form is lengthy and requires a lot of thought to fill out, an instruction booklet should be included with the form. These instructions are longer but explain more about filling out the form. They should try to answer any questions the applicant may have about his/her choices while completing the form. They will explain clearly how to fill out the form, including which sections are mandatory and which are optional. These instructions should read like a written procedure that summarizes the form.
The font size of the wording should be chosen carefully so that the words are big enough, and the lines should be double spaced so that the instructions are clear enough to read and understand. An acceptable reading font size is around 12 pt or 14 pt. Times New Roman, Arial and Courier are standard TrueType fonts that are clear and easy to read.
1.4.3 Section Instructions
Instructions will also be included in each section. These will explain clearly how to fill out that section of the form and state whether the section must be filled out for the form to be considered complete.
1.5 ADDRESSING AND MAILING
1.5.1 Self-Routing
On the bottom of the last page of the form, or on its back, there will be a space for the address of the employer and a space for the applicant to fill in his/her address, along with extra space in case the form has to be sent along multiple routes. This makes it easier for forms to be transferred to the employer and increases the capability of self-routing mail. When addressing a certain employer, job titles should be used instead of names, in case departmental changes occur due to promotions or lay-offs. Such changes alter the positions held by the employees in charge of certain departments, which means different responsibilities for those people.
1.5.2 E-Mailing and Faxing
Companies that have email will be at an advantage: they can email a copy of the form to the user, have them fill out the appropriate information, and then email the results back to the employer. For companies that don't have email, fax machines are also useful; they can simply fax the forms to the employees or applicants.
The employees can then fill out the form and fax it back or bring it to the employer in person.
1.5.3 Personal Mailboxes
In most companies, employers and employees have their own personal mailboxes. By including both the address of the employee and that of the employer, it is easier for employees or users to transfer forms to the employer. In the event that the employer is out on a business trip, applicants may simply drop the forms into the employer's mailbox to meet deadlines.
1.6 FORM LAYOUT
· Sheet Size
The forms should be designed on 8 1/2" x 11" carbon paper with a carbon sheet on the back, so that the person filling out the form can keep a copy for him/herself. The sections of the form should be placed on both sides of the paper to save paper. The information on the forms should not be crammed: important information could be left out, or the questions could become harder to read due to poor spacing or small lettering.
· Margins
The form should have half-inch margins on all sides so that the wording isn't too close to the edge of the page. This allows the user or reader to hold the paper without covering any wording on the form.
· Spacing
The amount of horizontal and vertical spacing is determined by the number of headings and sub-headings, the size and style of the text, and the amount of space left for fill-in answers.
· Box Format
The form will follow a box format, which increases usable space because the information extends to each end of the page margin. It will have generous horizontal and vertical spacing to enable easier reading.
· Borders and Bolding
The different sections of the form will be divided by solid black lines. The headings and sub-headings will be bolded and larger than the question text in order to improve the visual appearance of each section of the form.
· Shading
Shading will be used in sections where no information is required, to make it easier for the applicant to know which sections he/she needs to fill in. It can also be used to highlight sections that need to be filled in, but not by the applicant. For example, some forms have sections marked "for office use only," meaning that the applicant does not fill out any information in that section.
· Answer Spaces
There will be spaces on the right side of each section, aligned with one another, used for filling in information that contains only numbers or a letter code. Where the answer to a question requires several lines, there will be more than enough space available to answer it appropriately. The information must therefore be clear and widely spaced so that the forms are very easy to fill out.
1.7 BREAKDOWN OF FORM ARRANGEMENTS
The form should be set up in a way that makes it easy for applicants to fill in. The sections of the form will be organized so that all related parts are placed one after the other, to avoid reading back through the form. The form will have headings and sub-headings that define which section of the form you are filling out and help you understand what kind of information you should fill in.
1.7.1 Beginning
The personal information will be placed at the beginning of the form. This will contain items such as the applicant's name, address, phone number, and date of birth.
1.7.2 Body
This will contain the basic purpose of the form. It will have the questions needed to complete the form, depending on what kind of form it is.
For example, in an application for a job, the beginning would include the items mentioned above in the beginning section. The body would contain previous education, previous employment, the position you wish to apply for, and your references.
1.7.3 Ending
This section of the form will have spaces to fill in the address of the person you wish to send it to, along with your own address. It will have several spaces in case you wish to send it to more than one person.
1.8 REVISING AN EXISTING FORM
There are many things to consider when revising a form:
· Previous forms will be considered obsolete.
· Previous editions of forms can be used until there are no more left; companies can exhaust the older forms before presenting a new one.
· Existing stocks that include the form number and edition date can be used. The now-obsolete forms will be replaced by new ones, but the form numbers and edition dates will be transferred to the new forms.
1.9 REPLACING EXISTING FORMS WITH DIFFERENT NUMBERS
· You first have to replace the form numbers and edition dates, which are now considered obsolete.
· Instead of replacing the numbers and dates right away, you can wait until there are no forms left and then make the changes on the new forms.

Polymorphic & Cloning Computer Viruses

The generation of today is growing up in a fast-growing, high-tech world which allows us to do what was impossible yesterday.
With the help of modern telecommunications and the rapid growth of the personal computer in the average household, we are able to talk to and share information with people from all sides of the globe. However, this vast amount of information transport has opened the doors for the computer "virus" of the future to flourish. As time passes, so-called "viruses" are becoming more and more adaptive and dangerous. No longer are viruses merely a rarity among computer users, and no longer are they mere nuisances. Since many people depend on the data in their computers every day to make a living, the risk of catastrophe has increased tenfold. The people who create computer viruses are becoming much more adept at making them harder to detect and eliminate. These so-called "polymorphic" viruses are able to clone themselves and change themselves as needed to avoid detection. This breed of "smart virus" gives the virus a form of artificial intelligence. To understand the way a computer virus works and spreads, one must first understand some basics about computers, specifically the way they store data. Because of the severity of the damage these viruses may cause, it is important to understand how anti-virus programs go about detecting them and how the virus itself adapts to meet the ever-changing conditions of a computer. In much the same way as animals, computer viruses live in complex environments. In this case, the computer acts as a form of ecosystem in which the virus functions. To adequately understand how and why the virus adapts itself, it must first be shown how the environment is constantly changing and how the virus can interact with and respond to these changes. There are many kinds of computers in the world; however, for simplicity's sake, this paper will focus on the most common form of personal computer, the 80x86, better known as an IBM-compatible machine.
The computer itself is run by a special piece of electronics known as a microprocessor. This acts as the brains of the computer ecosystem and could be said to sit at the top of the food chain. A computer's primary function is to hold and manipulate data, and that is where a virus comes into play. Data is stored in the computer via memory. There are two general categories of memory: random access memory (RAM) and physical memory (hard and floppy diskettes). A virus can reside in either type. RAM is by nature temporary; every time the computer is reset, the RAM is erased. Physical memory, however, is fairly permanent: a piece of information, data, file, program, or virus placed there will still be around after the computer is turned off. Within this complex environment exist computer viruses. There is no exact and concrete definition of a computer virus, but over time some commonly accepted facts have emerged. All viruses are programs or pieces of programs that reside in some form of memory. They were all created by a person with the explicit intent of being a virus. For example, a bug (or error) in a program, while perhaps dangerous, is not considered a computer virus because it was created by accident by the programmers of the software. Viruses, therefore, are not created by accident. They can, however, be contracted and passed along by accident. In fact, it may be weeks before a person is even aware that their computer has a virus. All viruses try to spread themselves in some way. Some viruses simply copy clones of themselves all over the hard drive. These are referred to as cloning viruses. They can be very destructive and spread quickly and easily throughout the computer system. To illustrate the way a standard cloning virus adapts to its surroundings, a theoretical example will be used.
One day a teacher decides to use his/her classroom Macintosh's Netscape to download some material on photosynthesis. Included in that material is a movie file which illustrates the process. However, the teacher is not aware that the movie file is infected with a computer virus. The virus is a section of binary code, attached to the end of the movie file, that executes its programmed operations whenever the file is accessed. Then the teacher plays the movie. As the movie plays, the virus makes a clone of itself in every file inside the system folder of that computer. The teacher shuts down the computer normally, but the next day when it is booted up, all of the colors have changed to black and white. The explanation is that the virus has been programmed to copy itself into all of the files that the computer accesses in a day. When the computer reboots, the Macintosh operating system looks at a file in the system folder to see how many colors to use. The virus notices the system access this file, immediately copies itself into it, and changes the number of colors to two. Thus the virus has detected a change in the files being opened on the computer and adapted by placing a clone of itself into the color configuration file. Another prime way that viruses spread extremely rapidly is via LANs (Local Area Networks), such as the one set up at Lincoln that connects all of the classroom Macs together. A LAN is a group of computers linked together with very fast, high-capacity cables. Below is an illustrated example of a network of computers. Since all of the computers on a network are already connected, the transportation of a virus is made even easier. When the "color" virus from the above example detects that the computer is using the network to copy files across the school, it automatically clones a copy of itself into every file that is transported across the network.
When it reaches the new computer, it waits until the machine has been shut off and turned back on again, then copies itself into the color configuration files and changes the display to black and white. If this computer should then log on to the network, the virus will transport itself again. In this manner, network-capable viruses can very quickly adapt and cripple an entire corporation or office building. Due to the severity of some viruses, people have devised methods of detecting and eradicating them. Anti-viral programs scan the entire hard drive looking for evidence that viruses may have infected it. These programs must be told very specifically what to look for on the hard drive. There are two main methods of detecting viruses on a computer. The first is to compare the files on the hard disk to known types of viruses. While this method is very precise, it can be rendered totally useless when dealing with a new and previously unknown virus. The other method deals with the way in which a common cloning virus adapts. All that a cloning virus really does is look at what operations the computer is executing and react and adapt to them by making more copies of itself. This is the serious flaw in cloning viruses: all the copies look the same. Basically, all data in a computer is stored in a byte-structure format. These bytes, which are analogous to symbols, occur in specific orders and lengths. Each of the cloned viruses has the same order and length of byte structure. All the anti-virus program has to do is scan the hard drive for byte structures that are duplicated several times and delete them. This method is an excellent way of dealing with the adaptive and reproducing format of cloning viruses. The disadvantage is that it can produce a number of false alarms, such as when a user has two copies of the same file. Thus a simple cloning virus's main flaw is exposed.
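The duplicate-byte-structure scan described above can be sketched in a few lines. The sketch below is a hypothetical illustration, not code from any real anti-virus product; the function name and the idea of hashing file contents (rather than comparing raw bytes pairwise) are assumptions made for brevity. Files whose bytes are identical, as the clones of a cloning virus would be, end up grouped under the same hash:

```python
import hashlib
import os
from collections import defaultdict

def scan_for_duplicates(root):
    """Group files under `root` by a hash of their raw bytes.

    Identical byte structures -- such as the clones a cloning virus
    leaves behind -- produce identical hashes, so any hash shared by
    more than one file marks a candidate infection.
    """
    groups = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            groups[digest].append(path)
    # Keep only hashes seen more than once. Note this reproduces the
    # false-alarm weakness the essay mentions: two legitimate copies
    # of the same file are byte-identical and get flagged too.
    return {h: paths for h, paths in groups.items() if len(paths) > 1}
```

A real scanner of the era worked on signature substrings rather than whole-file hashes, but the principle is the same: repeated, identical byte structures are cheap to find, which is exactly why polymorphic viruses abandon exact cloning.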
However, the (sick-minded) people who create these viruses have found a way to get around this by creating a new and even more adaptive virus: the polymorphic virus. Polymorphic viruses were created with the explicit intent of being able to adapt and reproduce in ways other than simple cloning. These viruses contain a form of artificial intelligence. While this makes them by no means as smart or adaptive as a human being, it does allow them to avoid conventional means of detection. A conventional anti-virus program searching for cloned viruses will not flag files with different byte structures as viruses. A good analogy for a polymorphic virus is a chameleon. The chameleon is able to change its outward appearance but not the fact that it is a chameleon. A polymorphic virus's main goal is just like that of any other virus: to reproduce itself and complete some programmed task (like deleting files or changing the colors of the monitor); this never changes. It is the way in which they reproduce that makes them different. A polymorphic virus does more to adapt than just make copies of itself into other files. In fact, it does not really clone its physical byte structure at all. Instead it creates other programs, with different byte structures, that attempt to perform the same task. In a sense, polymorphic viruses are smart enough to evolve by writing new programs on the fly. Because they all have different byte structures, they pass undetected through conventional byte-comparison anti-viral techniques. Not only are polymorphic viruses smart enough to react to their environment by adaptation, but they are able to do it in a systematic way that prevents their future detection and allows them to take on a new life of their own. Computer viruses are extremely dangerous programs that adapt themselves to the ever-changing environment of memory by making copies of themselves.
Cloning viruses create exact copies of themselves and attach to other files on the hard drive in an attempt to survive detection. Polymorphic viruses are able to change their actual appearance in memory and copy themselves, in much the same way that a chameleon changes colors to avoid a predator. It is not only the destructive nature of computer viruses that makes them so dangerous in today's society of telecommunications, but also their ability to adapt to their surroundings and react in ways that allow them to proceed undetected and wreak more havoc on personal computer users across the globe.

Internet Pornography: Freedom of Press or Dangerous Influence?

The topic of pornography is often controversial because of its various definitions, each with a different context. Is it nudity, sexual intercourse, art, or all of these? Is it magazines, videos, or pictures? For the purposes of this paper, pornography will be defined as any material that depicts erotic behavior and is intended to cause sexual excitement. Given all of the arguments presented in this paper, it seems only a vague definition of this type can be applicable to all views on the subject.
Pornography on the Internet has brought about difficulties pertaining to censorship. The arguments in this paper can be divided into two categories: those whose aim is to allow an uncensored Internet, and those who wish to eliminate pornography from the Internet altogether. Arguments for an uncensored Internet cite the basic rights of free speech and press. While the arguments in this paper are international, almost every one of them cites the First Amendment of the United States. In many of the papers it is implied that the United States sets precedent for the rest of the world as far as laws governing the global world of the Internet. Paul F. Burton, an Information Science professor and researcher, gives many statistics showing that the presence of pornography on the Internet is not necessarily a bad thing. He gives one example showing that "47% of the 11,000" most popular searches on the Internet are targeted at pornography. This suggests that pornography has given the Internet approximately half of its clientele (2). Without this, the Internet would hardly be the global market that it is today. Most people on the Internet are not there merely for pornography, either; it is a part-time activity pursued when not attending to serious matters. At another point in his paper, Burton cites reasons why the Internet is treated differently than other forms of media. The privacy of access is a factor that allows many people to explore pornography without the embarrassment of having to go to a store and buy it. Another factor is that anybody, including children of unwatchful parents, may access the material. However, Burton believes that these pornographic web sites must be treated the same way as pornographic magazines or videos. One fear of many people is that children will happen across pornography, but as Burton writes in his paper, the odds of someone not looking for pornography and finding it anyway are "worse than 70,000:1" (Holderness in Burton 2).
Even if a child were to accidentally find an adult site, he or she would most likely see a "cover page" (see Figure 1). These cover pages, found on approximately 70% of adult sites, carry legal jargon that, summed up, says: "if you are not of age, leave." A cover page will not stop children in search of pornography, because all that is required to access the site is a click on an "enter" button. Adult verification systems, such as Adult Check and Adult Pass, have been very effective in governing access to these sites, but with only 11% of adult sites using verification of this nature, the system does not seem realistic. Another method of controlling access is the use of a credit card number to verify age. This method opens many doors for criminals wishing to obtain these numbers for unlawful use. According to Yaman Akdeniz, a Ph.D. researcher at the Centre for Criminal Justice Studies at the University of Leeds, pornography is not as widespread as some governments would have us believe. Of a total of 14,000 Usenet discussion groups (places where messages are posted about specific topics), only 200 are sexually related. Furthermore, approximately half of those are related to serious sexual topics, such as abuse or rape recovery groups. Akdeniz also makes the point that "[t]he Internet is a complex, anarchic, and multi-national environment where old concepts of regulation...may not be easily applicable..." (15). This makes a very interesting case about the general nature of the Internet. It is the first electronic media source that is entirely global, and although some countries will try, and have tried, to regulate it, there is no way to mesh what every country does to control the Internet. Germany made an attempt at regulating the Internet within its borders; however, the aim was not only to ban pornography but also to ban anti-Semitic newsgroups and web sites.
Prodigy, a global network server, helped the German government by blocking these Web sites. When Prodigy was pressured by groups like the American Civil Liberties Union, it stopped blocking the sites, and there was nothing Germany could do. This shows the "power" that the United States holds over the Internet. Two reasons account for this "power": first, 60% of all the information comes from the U.S., and second, the U.S. has set up most global laws and regulations. Almost every article pertaining to Internet freedom or censorship cites the U.S. and bases its arguments on the First Amendment. With this precedent-setting responsibility, one must look at what is going on in the courts with regard to the Internet. Peter H. Lewis, a reporter for the New York Times, has been covering the Communications Decency Act since the passing of the law. The Communications Decency Act, part of the Telecommunications Act, was passed on February 8, 1996. The main purpose of this section was to halt the "flow of pornography and other objectionable material on the Internet..." (1). This section, however, was declared unconstitutional by a special three-judge federal panel in June 1996. The overturn caused an uproar among anti-pornography groups in the United States. The case will be heard again in 1997, this time by the Supreme Court, to determine whether First Amendment rights were being violated. Judge Stewart R. Dalzell, a member of the federal panel, stated that "Just as the strength of the Internet is chaos, so the strength of our liberty depends upon the chaos and cacophony of the unfettered speech the First Amendment protects" (Lewis, Judges 1). According to Lewis' next article, no one will be prosecuted under the Internet section of this law until its constitutionality is determined. So as of right now, there is no fear of prosecution for pornographers (Lewis, Federal 1). Maria Semineno, a writer for PCWeek, reported on free speech advocates' reactions to the overturning of the CDA.
Jerry Berman, executive director of the Center for Democracy and Technology, stated that "[i]t is very clear that Congress is not going to let this alone...." Berman made this statement alluding to the makeup of the Supreme Court and what will happen in 1997 when the decision is reevaluated. It is argued that the Supreme Court is deeply divided on the subject of free speech, and therefore the decision in 1997 will depend upon the justices presiding. When the decision is made, it will leave one side of the debate triumphant and the other fighting for its beliefs. Those who hold that pornography should be wiped off the Internet entirely cite many different reasons. One highly recognized group, the Family Research Council, has determined that pornography on the Internet is harmful to all individuals and concludes that the only way to stop this is to ban pornography, in all its forms, on the Internet. The FRC categorizes pornography as follows: ...images of soft-core nudity, hard-core sex acts, anal sex, bestiality & dominion, sado-masochism (including actual torture and mutilation, usually of women, for sexual pleasure), scatological acts (defecating and urinating, usually on women, for sexual pleasure), fetishes, and child pornography. Additionally there is textual pornography including detailed stories of rape, mutilation, torture of women, sexual abuse of children, graphic incest, etc. ("Computer" 1) In addition to categorizing pornography, the FRC goes on to address questions pertaining to Internet pornography. One question asked is, "IS THE ON-LINE COMMUNITY AGAINST PROPOSALS FOR 'DECENCY' ON THE INTERNET?" The answer provided was no: of the 20 million people on the Internet (an out-dated figure), only 2 percent opposed censorship. However, no citation for this figure was provided ("Computer" 2). The FRC article then goes on to discuss the main arguments against banning pornography.
The article poses the question of a possible loss of works of art because of a ban. It goes on to cite the Supreme Court's "official" definition of obscenity: any work having artistic, educational, or moral value shall not be censored. The article next discusses "technological fixes," such as SurfWatch and NetNanny, that could possibly control pornography from within the home. It gives three points against this method: children can use other computers, children know more about computers than most parents, and people who distribute pornography have no legal reason not to target children with pornography ("Computer" 2). Cathleen A. Cleaver, head of legal studies for the FRC, backed the Communications Decency Act. When it was overturned, she stated her concern that not only were the broader sections of the law overturned, "...but also the part that made it illegal to transmit pornography directly to specific children" (Lewis 2). With this section omitted, pornographers may lure children without fearing any repercussions from the law. Although this is the FRC's main concern, they are still fighting for a total ban of pornography on the Internet. Dr. Victor B. Cline, a psychotherapist specializing in sexual addiction, argues that massive exposure, such as the Internet provides, will cause irreparable damage to society. Cline states that pornography, for all intents and purposes, should be treated as a drug. Cline has treated 350 people with this sexual illness and reports that "[o]nce involved in pornographic materials, they kept coming back for more." Cline suggests that with the availability of pornography reaching these proportions, we can expect to see an increase in sexual deviance and sexual illness (Cline 4). Cline next goes on to explain the steps an addict goes through to become a sexual deviant. First the person becomes addicted.
Secondly, there is an "escalation" of the addiction in which the person becomes engulfed in pornography, even to the point of preferring masturbation to pornography over actual sexual contact. Third, there is a process of "desensitization" which allows acceptance of horrific sexual acts as the norm. Fourth is the "acting out sexually" phase, in which a person no longer achieves satisfaction from the pornography and, in turn, acts upon fantasies, usually based on pornography. Cline's main concern is that with this material so easily available, more cases of sexual addiction will occur, not only among adults; children will also be able to "start out" at an early age (Cline 4-5). This fear is reinforced by the number of easily accessible pornographic sites. With all sides of this issue having their separate reasons to either keep or ban pornography, each makes its case with facts. Pornography is an issue on which it is difficult to take a side. People for pornography, or against censorship, state that there is no real reason for a ban. They argue that parental controls and other efforts to keep pornography from children are sufficient grounds to allow pornography. However, anti-pornography groups argue that not enough is being done to keep pornography from children, and furthermore, that pornography affects adults just as much as it does children. The issue of pornography is just as controversial as the abortion debate and very similar in many respects. Both sides have strong feelings about what definitions are used, what is morally correct, and the effects of pornography. There is no clear-cut answer; however, it is now up to the U.S. government to make a decision and set a precedent for the rest of the world.
Works Cited
Akdeniz, Yaman. "The Regulation of Pornography and Child Pornography on the Internet." Online. World Wide Web. http://137.205.240.103:80/elj/jilt/biogs/akdeniz.htm. 4 March 1997.
Burton, Paul F.
"Content on the Internet: Free or Fettered?" (20 Feb. 1996). Online. World Wide Web.http://www.dis.strath.as.uk/people/paul/CIL96.html. 21 Feb. 1997. "Computer Pornography Questions and Answers." (8 Nov. 1996). Online. World Wide Web.http://www.pff.org:80/townhall/FRC/infocus/if95k4pn.html. 8 Mar. 1997. Lewis, Peter H. "Judge Turn Back Law Intended to Regulate Internet Decency." (13 June 1996). Online. America Online. The New York Times Archive. 12 March 1997. ---. "Federal Judge Block Enforcement of CDA.." (16 Feb 1996). Online. America Online. The New York Times Archive. 12 March 1997. Semineno, Maria. "Free speech advocates: CDA fight might not end with Supreme Court." Online. World Wide Web.http://www.pcweek.com:80/news/0310/14ecda.html. 23 Feb. 1997. f:\12000 essays\technology & computers (295)\POSItouch.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Convention and Group Sales Sunday, April 06, 1997 POSitouch The POSitouch system was conceived in 1982, by the Ted and Bill Fuller, owners of the Gregg's Restaurant chain. They were looking to increase the efficiency of there restaurants through the use of computer technology. During there search they found systems but none meeting there total needs. That is why the Fullers created the company, (R.D.C) Restaurant Data Concepts. RDC keeps developing better and more efficient equipment to be used in the food service industry. ADVANTAGES DISADVANTAGES 1.) Timely information, and speeds operations. 1.) People will become dependent on technology. So when it fails they will 2.) Tighter labor controls. probably not be trained or prepared to be with out it. 3.) No need to hire or pay a bookkeeper. 2.) Takes time to train people to work efficiently on POSitouch. 4.) Calculates food costs and menu mix. 3.) POSitouch is expensive to the small 5.) Tighter controls over orders taken. business owner. The smallest system Cuts down on free meals waiters give out. 
that they have installed cost under $10,000. 6.) Can order (via modem) and keep track of inventory. 7.) Built-in modem allows technical support via modem, and on line access to reports available at anytime, even historical reports.. 8.) Sales trend analysis. 9.) Credit Card authorization with draft capture. 10.) Easy to customize, to meet the needs of many different types of operations. 11.) Increased speed means, increased turnover. Overall, I feel that POSitouch is well worth the initial expense. It should be looked at as an investment, saving time, and money in all areas needing tight controls. This management tool has been shown to cut labor, and food costs in many food service establishments, not to mention the speed of the system, which could easily increase turnover. There is one important key that should be recognized for restaurants planning to utilize this system. Be prepared for technology to fail. If it fails the managers and staff should be capable of staying open without the POSitouch system. The thing that I like most about this system is that you can truly tell that it was developed by people in the food service industry, do to its completeness. f:\12000 essays\technology & computers (295)\Private Cable TV.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The times are a"changing... How France, Germany and Sweden introduced private, cable and satellite TV - a comparison over the past 10 years. 1. INTRODUCTION Why we have chosen this subject? Before starting to write about TV in Sweden, Germany and France, we wanted to compare French,German and Swedish media. But on account of the wideness of this analysis, we decided to focus on the evolution of TV broadcasting during these last 10 years. The technical revolution which has appeared in this area since 1980 is necessary to be understood to be able to follow and forecast what will happen in the future when multinational companies can take a look on pan-european broadcasting. 
In this paper we try to take stock of these changes. Furthermore, as we come from different countries and now live in another one, we found it interesting to compare the TV broadcasting systems of the three countries (France, Germany and Sweden). While we were searching for data, we discovered the gap that exists in cable coverage between France and the two other countries. What are the main reasons for this delay? Are they political, financial or cultural? We will try to answer these questions in our paper. But we will first define the different technical terms that we are going to focus on. Then we will describe the birth of private channels, their regulations, laws and financing in the different countries.
2. BASICS
In our paper you will find the following technical terms:
terrestrial broadcasting: this is the basic technology used to broadcast radio and TV. It is the use of radio frequencies that can be received by a simple antenna. The problem with terrestrial broadcasting is that you only have a few (up to a maximum of 7) possible frequencies, and that you need expensive transmitters every 100-150 km to cover an area. Programmes broadcast terrestrially include, e.g.: Swedish TV 1, 2 and 4; German ARD, ZDF, the regional "third" programmes and some private channels in urban areas; French TF 1, France 2 and France 3.
cable TV: the reason why you have only a few frequencies with terrestrial broadcasting is that it is subject to physical constraints (limited bandwidth), whereas broadcasting in a cable is shielded/protected from outside influences. So you can have more channels in the same bandwidth. For example, a cable might carry 7 programmes picked up by antenna from terrestrial transmitters plus an additional 25 satellite channels (a maximum of 30-35 different channels in one cable). Instead of connecting to an antenna, cable households connect their TV sets to the cable network.
satellite broadcasting: a satellite is a transmitter positioned in orbit about 36,000 km above the earth. The advantage of this technology is that a single transmitter covers a wide area. Modern direct broadcasting satellites (DBS, e.g. Astra) can be received with small (around 30 cm) and cheap (around 2,000 SKR) "satellite dishes". To connect a TV set to the "dish" you also need a device that converts the received satellite signals into signals that can be used by a standard TV set. In the beginning (the 80s) this technology needed huge and expensive dishes and was only used to transmit signals to cable networks. Newer technology is often cheaper than connecting a house to a cable network. In east Germany the German PTT (Telekom) is competing with its cable network against the cheap satellite dishes. Most transponder signals on DBS-Astra are booked by British (NBC Super, MTV...) and German (RTL, SAT-1...) broadcasters. Satellites can also be used for telephone connections and for TV or radio broadcasting.
3. TV-BROADCASTING IN FRANCE
3.1 HISTORY (PUBLIC TV 1930S - 1984)
The first broadcasting tests happened in the late 30s, as in Germany. It was only in 1945, after the Second World War, that an ordinance formalized the state monopoly on broadcasting, which was assigned to Radiodiffusion de France. Radiodiffusion de France then incorporated television and in 1959 became RTF (Radiodiffusion-Television de France). Established as a public company accountable to the Ministry of Information, RTF became an "Office" (ORTF), still supervised by the government. The events that happened in France in May 1968 then pushed the government to liberalize the medium. The Ministry of Information was therefore abolished, and in 1974 an Act divided the ORTF into seven different public companies which formed the public broadcasting service: TF1, Antenne 2, FR3, Radio France, TDF, SFP, INA.
Private channels emerged in France with Canal Plus, the encrypted pay channel, in 1984. This terrestrial channel is owned by Havas. Canal Plus has to broadcast an unencrypted programme daily, lasting from 45 minutes to 6 hours; the average is three and a half hours per day. 1985 saw the birth of two new private channels, France 5 and TV6, which were forbidden to broadcast the following year. Finally, in 1987, they regained the right to broadcast under the respective names La Cinq and M6. By this time there already existed five public channels: TF1 (privatized in 1987), A2 (renamed France 2, a generalist channel), FR3 (today called France 3, a national and regional TV), TV 5 Europe (a European channel launched in 1983, which transmits programmes broadcast in French-speaking countries by satellite) and RFO (which transmits radio and TV programmes to French overseas territories and possessions). In May 1992, ARTE-La Sept, the Franco-German channel, started to broadcast on the French and German cable networks. Then, when the private French channel La Cinq stopped broadcasting, ARTE was allowed to broadcast from 19h to 1h in the morning on the freed frequency. On 13 December 1994 a new public channel appeared, "La Cinquieme", also called the "channel of knowledge" (la chaîne du savoir), which broadcasts on the same frequency as ARTE until 19h. To summarise, today the French TV broadcasters are:
public: France 2, France 3, Arte, La Cinquieme, RFO, TV 5
private: TF 1, M6, Canal+ (pay-TV)
3.2 CABLE/SATELLITE TV
Cable channels were launched in France in 1984, when 2% of households were cabled. This initiative came from Minister Mauroy, who presented cable as "a massive, consistent and orderly solution to satisfy multiple communication needs". In fact this cable plan met opposition from several parties.
It represented too high a cost, and the state organization (DGT), assigned overall control of the implementation of the new technology, antagonized the manufacturers of cable equipment, who proved unable to produce what was required within the agreed price and time. In 1986 the cable plan was definitively abandoned. Around 10 private companies are now responsible for promoting cable, for instance la Compagnie Générale de Videocommunication, la Lyonnaise Communication, Eurocable... There are 25 local channels, 13 French channels are broadcast, cable now reaches 25.3% of French households, and the fee varies from 115 SKR to 400 SKR according to the number of channels you wish to receive. Laying cable is expensive for companies in France, as it requires costly material such as optical fibre. Because of this cost, the cable network is now aimed at collective dwellings instead of individuals. Furthermore, this installation can only be carried out with the consent of the commune; otherwise the cable company cannot obtain authorisation. The commercial board of the cable company has to convince these communities. France owns two direct-broadcast satellites, TDF 1 and TDF 2, and one telecommunications satellite, TELECOM 2A. Most of the programmes distributed by satellite are in fact the ones you can get through the cable.
3.3 LAWS AND REGULATIONS
The C.S.A. (Conseil Supérieur de l'Audiovisuel) is the authority responsible in France for broadcasting regulation. It is composed of nine appointed members:
- three chosen by the President of the Republic
- three chosen by the President of the Senate
- three by the President of the National Assembly
As we can see, this institution is quite politicised. It ensures respect for pluralist expression of ideas, for the French language and culture, for free competition, and for the quality and diversity of programmes... It also manages the allocation of frequencies.
It can intervene in the public as well as the private sector. It grants licences for the operation of cable networks and of satellite and terrestrial television; M6 and Canal Plus, for instance, are allowed to broadcast for 10 years, after which they have to renegotiate their broadcasting authorisation. Authorisations for cable TV last 20 years and can be granted to companies or "regies" on the proposal of locally elected officials. Furthermore, French and foreign channels that want to broadcast on the cable network need to sign a convention with the CSA. The implementation of the network is then the commune's responsibility. The CSA also enforces policies, such as the advertising rules. Advertising time is limited to 12 minutes per hour. TF1, for instance, exceeded this allowance once by 81 seconds and another time by 94 seconds, and was therefore obliged to pay 2,800,000 FF (4,000,000 SEK) for the 175 seconds of overrun, which equals 16,000 FF (23,000 SEK) per second. The CSA also regulates political airtime on the public channels and enforces the "rule of three thirds": in political programming, a channel must give one third of the time to the government, one third to the majority, and one third to the opposition.
3.4 FINANCING
4. TV-BROADCASTING IN GERMANY
4.1 HISTORY
The first TV experiments in Germany were made in the 1930s, to broadcast e.g. the Olympic Games. After World War II the forerunner of the first German TV station, ARD, began broadcasting under Allied control in 1949 in northern Germany and Northrhine-Westfalia, under the responsibility of the NWDR Laenderanstalt. The ARD is a broadcaster with only organizing functions for the "Laender"-based production facilities (Laenderanstalten, e.g. NDR, WDR...). Every part of the programme broadcast under the label ARD is produced under the responsibility of a state-based station. The second German broadcaster, ZDF, is different from ARD. The ZDF produces TV on its own, but the station is indirectly controlled by a conference of the states.
There are also several regional "third" channels, bound to the culture of one or more states, which are broadcast only within those states and are produced by the Laenderanstalten. Private TV programmes were introduced in 1984; you will find more about their introduction on the following page. There were 15 Germany-based TV broadcasters in 1994. To summarise, today the Germany-based TV broadcasters are:
public: ARD, ZDF, Arte (with F), 3-Sat (with AU + CH), DW-TV (foreign service)
private (general interest): RTL, Sat 1, Pro7
private (special interest): Kabel 1, Vox, Viva, RTL 2, DSF, n-tv
private (pay TV): Premiere
Definitions on the next page!
4.2 CABLE/SATELLITE TV
The German PTT was one of the first PTTs in Europe to develop standards for cabling private households. But in the late 70s the Social Democrats (SPD) blocked the PTT, because the Bonn government was afraid that cable technology would lead to private TV. After the change of government in 1982, the new conservative government (CDU) and the minister for post and telecommunication, Schwarz-Schilling, invested in the new cable technology. The first private TV broadcasters (SAT-1 and RTLplus) got their licenses for a cable trial project in Ludwigshafen in 1984. After the start of the Ludwigshafen project (planned to last 3 years), the states with conservative majorities allowed the PTT to carry the trial programmes in their regular cable networks. This was the beginning of private TV in Germany, and a trial project became regular service within a few months. After a decision of the highest court in 1986, commercial TV was legal. The Social Democrats (SPD) changed their policy toward private TV in the late 80s and gave licenses to a few of the most important private broadcasters in states with an SPD majority.
Now Koeln (Cologne), in the state of Northrhine-Westfalia (SPD), is one of the most important places for German media (RTL, Viva-TV, Vox), alongside the traditional "media capitals" Hamburg and Muenchen. After unification in 1990 the PTT Telekom invested in cable networks in the former GDR. But in 1994 only 14 percent of all east-German households were connected to a cable network, and even terrestrial broadcasting still had not reached the "western" standard. For eastern Germany satellite TV is very important. For this reason the German public broadcasters ARD and ZDF decided in 1992 to broadcast via the Astra satellite to reach the eastern population. In 1993 the PTT signed a contract with the Luxembourg-based Astra enterprise to become an associate member of this commercial organization. Since 1995 Telekom has been a private company, and there are plans to provide technology for digital and pay TV in the future. 17% of all east-German households and 11% of all west-German households have a satellite dish (1993). More than 90% of the German satellite dishes are pointed at the Astra satellite. 48% (west) and 14% (east) of all households are connected to a cable network. In some urban areas free terrestrial frequencies are licensed to a few private channels (RTL, Sat 1, Pro 7). Local TV is very new in Germany; the first license was given by the states Berlin and Brandenburg to "1A-Brandenburg" in 1993, for the towns Potsdam and Berlin. There are also some projects with state-financed open channels in several cable networks.
4.3 LAWS AND REGULATIONS
Among the three countries we compare, Germany is the only one with a federal system. Media in general are subject to rules and laws made by the several decentralized state governments within the Federal Republic of Germany. The public broadcasters, too, are governed by the several states (Laender), and the private channels get their licenses from the states.
The reason for the decentralized broadcasting system in Germany is the German "Grundgesetz", the Basic Law, which guarantees the "cultural sovereignty" of the states. This Basic Law protects the media from possible political interests a central (Bonn- or Berlin-based) government might have. Even the fees for the public broadcasters are fixed by decisions of a conference of the federal states. The only exception now is the Deutsche Welle (DW-TV), a broadcaster for foreign countries which is used as an "ambassador" for German culture and is under special government regulation. In the 80s all German states drafted private-media laws. Now every state has the legal possibility to give licenses to commercial TV stations. The supervisory body for licenses in each state is called the "Landesmedienanstalt". Because of the decentralised German system, all laws and regulations concerning commercial broadcasters are connected to the "cultural sovereignty" of the states. To avoid a private broadcaster having to license its programme in every one of the 16 German states, all states signed a contract (Staatsvertrag). This contract guarantees, e.g., that each state will accept a license given by the Landesmedienanstalt of a single German state. The contract also fixes regulations about ownership and the content of programmes, and gives each Landesmedienanstalt the possibility to challenge decisions made in another state. Each Landesmedienanstalt is also responsible for deciding which programmes are allowed to be carried in the PTT cable network in its state (normally: 1. stations licensed within the state, 2. stations licensed in other states, 3. foreign stations). Another important assignment of the Landesmedienanstalt is to enforce the German media ownership regulations. There are specific ownership quotas which have to be controlled. The strongest regulation is that no one is allowed to hold more than 50% of a broadcaster.
Another important mechanism is the declaration of a channel: there are declarations as "special interest" (only one topic, e.g. sport or movies), "general interest" (with information/news) and "pay TV". The most important German media investors are Bertelsmann (RTL, Premiere) and the Kirch Group (Sat 1, Kabel 1, Pro 7). Both groups are accused of violating the ownership and monopoly law, which will be renewed within this year. Because of the relatively liberal licensing law, in 1994 more than 10 new entrepreneurs (e.g. Disney) announced that they would apply for a German TV license.
5. SWEDEN
5.1 HISTORY
Unlike Germany and France, where experimental TV broadcasting started in the late 30s, Sweden launched its first channel in 1956. But as in France and Germany, the state had a monopoly on broadcasting. The first Swedish channel was Channel 1; the second channel (TV 2) was launched in 1969. Since 1987 the two public television channels have been organized in such a way that TV 1 is based on programme production in Stockholm and TV 2 on production in ten TV districts in the provinces. The first two private Swedish channels were introduced in 1987, by satellite and cable. TV 3 and the pay channel Filmnet are Swedish-owned but were not licensed to transmit on terrestrial frequencies, so they transmit via satellite and cable. In 1989 the third satellite broadcaster, the Nordic Channel, was launched, and two more pay-TV channels, TV 1000 and SF-Succé, were introduced to the market. TV 1000 and Succé merged two years later. The first private channel licensed to transmit terrestrially within Sweden was TV 4, in 1991. To summarise, today the Swedish TV broadcasters are:
public: TV 1, TV 2
private: TV 3, TV 4, TV 5 Nordic (pay-TV), TV 1000 (pay-TV)
5.2 CABLE AND SAT
The construction of cable networks began in 1984. This venture was supposed to create 3,000 jobs per year for 7 years and was a means of protecting the telephone monopoly.
Now Sweden is among the European countries with the most cable subscribers (along with B, NL, CH). Up to 50% of all households in Sweden have access to cable, and 7% own a satellite dish. As in France, the cable networks gave local stations a chance. Advertising is not allowed on these local stations, so they lack money and often broadcast only a few hours a day. Local TV is provided in about 30 towns and can be seen by 16% of all Swedes (1993). Satellite broadcasting was born in the mid-1970s through an agreement among the five Nordic countries to launch NORDSAT. This satellite was to reinforce the cooperation between these countries and also help to promote Nordic culture. In fact this project died, and Tele-X was launched by Sweden and Norway; Finland later joined the project. Nowadays 60% of Swedish households have access to the satellite channels.
5.3 LAWS AND REGULATIONS
- cable transmission legislation 1992
In Sweden, the Radio Act and the Enabling Agreement between the broadcasting companies and the State set broadcasting policy. The State exercises no control over the programmes prior to broadcasting. However, a Broadcasting Council is empowered to raise objections to specific programmes. The Cable Law
- The two Swedish public channels are financed by a license fee.
6. CONCLUSION
In the times of public TV, the few possible frequencies for terrestrial broadcasting were used by the very few public channels in each country. These channels were under the control of the state and not connected to the financial interests of owners or investors. With the beginning of the 80s, the invention of cable TV made the distribution of up to 30 channels possible. Our governments had to face the demand for TV licenses and also had to invest in cable infrastructure. In the late 80s, new direct broadcasting satellites gave the same number of channels to households in less developed regions.
One thing we found out, and can now state as a major fact, is that there is no real cable infrastructure in France and only a few commercial channels (compared with its 57 million inhabitants). The market seems to have been shaped by the state's failure to provide cable access. For some reasons we can't evaluate from Sweden in a few weeks, the "sleeping beauty" France managed not to develop a cable network. But we can compare the facts for all three countries and conclude:
- dual system in all 3 countries (public and private TV since the mid 80s)
- TV is important in all countries, 97% (see chart)
- pay TV is introduced in all countries
7. QUESTIONS TO THE CLASS
- maybe there is no demand for cable in France?
- will the public channels survive?
- we only evaluated quantity and historical information and facts - what about quality?
f:\12000 essays\technology & computers (295)\Procedures Parameters & SubPrograms.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Procedures, Parameters & Sub-Programs
In any modern programming language, procedures play a vital role in the construction of new software. These days, procedures are used instead of the old constructs GOTO and GOSUB, which have since become obsolete. Procedures provide a number of important features for the modern software engineer: Programs are easier to write. Procedures save a large amount of time during software development, as the programmer only needs to code a procedure once but can use it a number of times. Procedures are especially useful in recursive algorithms, where the same piece of code has to be executed over and over again. The use of procedures allows a large and complex program to be broken up into a number of much smaller parts, each accomplished by a procedure. Procedures also provide a form of abstraction: all the programmer has to know is how to call a procedure and what it does, not how it accomplishes the task. Programs are easier to read.
Procedures help to make programs shorter, and thus easier to read, by replacing long sequences of statements with one simple procedure call. By choosing good procedure names, the programmer even makes the names of the procedures help to document the program and make it easier to understand. Programs are easier to modify. When repeated actions are replaced by one procedure call, it becomes much easier to modify the code at a later stage, and also to correct any errors. By building up the program in a modular fashion via procedures, it becomes much easier to update and replace sections of the program at a later date, provided all the code for the specific section is in a particular module. Programs take less time to compile. Replacing a sequence of statements with one simple procedure call usually reduces the compilation time of the program, so long as the program contains more than one reference to the procedure! Object programs require less memory. Procedures reduce the memory consumption of the program in two ways. Firstly, they reduce code duplication, as the code only needs to be stored once but the procedure can be called many times. Secondly, procedures allow more efficient storage of data, because storage for a procedure's variables is allocated when the procedure is called and deallocated when it returns. We can divide procedures into two groups: Function procedures are procedures which compute a single value and whose calls appear in expressions. For example, the procedure ABS is a function procedure: when given a number x, ABS computes the absolute value of x, and a call of ABS appears in an expression, representing the value that ABS computes. Proper procedures are procedures whose calls are statements. For example, the procedure INC is a proper procedure: a call of INC is a statement, and executing INC changes the value stored in a variable.
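The distinction between the two groups can be sketched in Python rather than Modula-2 (a hypothetical translation: `absolute` stands in for ABS, `inc` for INC, and a one-element list plays the role of the variable that INC updates):

```python
def absolute(x):
    # Function procedure: computes a single value, so a call such as
    # absolute(-5) + 1 can appear inside an expression.
    return -x if x < 0 else x

def inc(box):
    # Proper procedure: the call is a whole statement, executed for its
    # side effect; box is a one-element list standing in for the
    # variable that Modula-2's INC would change.
    box[0] += 1

counter = [0]
result = absolute(-5) + 1   # function-procedure call inside an expression
inc(counter)                # proper-procedure call as a statement
print(result, counter[0])   # 6 1
```

The point is not the arithmetic but where each call may appear: `absolute(...)` denotes a value and lives inside expressions, while `inc(...)` is a complete statement whose only purpose is its effect.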
Procedures have only one real disadvantage: executing a procedure requires extra time, because of the extra work that must be done both when the procedure is called and when it returns. Most of the time, however, the advantages of using procedures heavily outweigh this minor disadvantage. Most procedures depend on data that varies from one call to the next, and for this reason Modula-2 allows a procedure heading to include a list of identifiers that represent variables or expressions to be supplied when calling the procedure. The programmer can use these identifiers, known as formal parameters, in the body of the procedure in the same fashion as ordinary variables. A call of a procedure with parameters must include a list of actual parameters. The number of actual parameters must be the same as the number of formal parameters. Correspondence between actual and formal parameters is by position: the first actual parameter corresponds to the first formal parameter, the second actual parameter corresponds to the second formal parameter, and so on. The type of each actual parameter must match the type of the corresponding formal parameter. Modula-2 provides two kinds of formal parameters: Variable parameters. In a procedure heading, if the reserved word VAR precedes a formal parameter, then it is a variable parameter. Any changes made to a variable parameter within the body of the procedure also affect the corresponding actual parameter in the main body of the program. Value parameters. If the reserved word VAR does not precede a formal parameter, then it is a value parameter. If a formal parameter is a value parameter, the corresponding actual parameter is protected from change, no matter what changes are made to the corresponding formal parameter in the procedure. To sum up, variable parameters allow information to flow both into and out of a procedure, whereas value parameters are one-way and only allow information into a procedure.
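Python has no VAR keyword, but the behavioural difference between the two parameter kinds can be sketched with a hypothetical pair of functions: rebinding a plain argument mimics a value parameter, while mutating a shared one-element list mimics a variable parameter:

```python
def set_by_value(n):
    # Like a Modula-2 value parameter: n is a local binding, so
    # reassigning it leaves the caller's variable untouched.
    n = 99

def set_by_var(cell):
    # Analogy for a Modula-2 VAR parameter: mutating the shared
    # one-element list makes the change visible to the caller.
    cell[0] = 99

a = 1
b = [1]
set_by_value(a)
set_by_var(b)
print(a, b[0])   # 1 99 -- only the VAR-style call changed the caller's data
```

Only `set_by_var` lets information flow back out to the caller, which is exactly the two-way flow the essay attributes to VAR parameters.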
Most Modula-2 systems allow a program to "call" a program module as if it were a procedure. We call a module used in this way a subprogram module, or just a subprogram. The commands for calling another program are not part of Modula-2 itself, but are provided by a procedure in a library module. In most Modula-2 systems the command for calling a subprogram is CALL, and a number of parameters are usually passed along with this procedure so as to allow the two programs to communicate with each other; there is, however, no way to supply parameters to the subprogram itself. The parameters passed only indicate things like whether the subprogram executed correctly and did not terminate early because of an error. The primary reason for using subprograms is to reduce the amount of memory required to execute a program. If a program is too large to fit into memory, the programmer can often identify one or more modules that need not exist simultaneously. The main module can then call these modules as subprograms when needed. Once a subprogram has completed execution, it returns control to the main program, which can then call another subprogram. All subprograms share the same area of memory, and because only one is resident at a time, the memory requirements of the overall program are greatly reduced. 
f:\12000 essays\technology & computers (295)\PROJECT MANAGEMENT IN COMPUTER.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Overview: The purpose of this document is to present a proposal that will allow Midnight Auto Supply to upgrade to a state-of-the-art computer system. Midnight Auto Supply currently has great potential for capturing a large percentage of the auto parts supply business in the St. Louis metropolitan area, and believes the business will benefit from automating its operation. 
Midnight Auto Supply is in need of a computer system that will handle all of its current daily management activities in a faster and more efficient manner. Presently, Midnight Auto Supply operates out of three locations: the main store and headquarters is located in Manchester, Missouri, and the other two locations are in St. Charles, Missouri and Afton, Missouri. After implementing this new computer system, Midnight Auto Supply will have the capability to successfully compete with or surpass all of its current competitors in the St. Louis metropolitan area. My company, ABC Software Solution, proposes that an A1 system be developed which will allow all three stores to become fully automated. The A1 system will grant Midnight Auto Supply the ability to keep all of its daily records and activities in real-time mode, while ensuring that all daily management activities are produced in a more effective and efficient manner. The A1 system will also accomplish the following tasks:
1. Keep an accurate status of inventory, by part number.
2. Determine the location of all parts (Manchester, St. Charles and Afton locations).
3. Maintain a list and cross-reference by manufacturer name, part class and part number.
4. Generate purchase orders.
5. Perform all payroll functions, including issuing paychecks and preparing address labels for mailing checks.
6. Maintain all accounts payable information.
7. Maintain all accounts receivable data.
Problem: Midnight Auto Supply is a growing company in the auto supply industry, consisting of three stores located in the St. Louis metropolitan area. Midnight Auto Supply currently does all of its reports, purchases, accounts receivable, accounts payable and payroll functions manually. Midnight Auto Supply is constantly having problems keeping track of its incoming parts, and its accounts receivable and accounts payable department is having a very difficult time keeping accurate records manually and expeditiously. 
Midnight Auto Supply currently has no idea what its inventory is composed of at each of its three store locations. Midnight Auto Supply is a small company with a limited budget, but it is growing fast and is eager to rectify its existing problems through the use of automation. Solution: After reading and analyzing the project information sheet, ABC Software Solution recommends the following. Using the Access Database Management System to build the database (see Overview), all three stores will be linked via the Internet. The database will be backed up every fifteen minutes. Each store will have a dedicated T1 telephone line connected to the client/server. The Access Database Management package will be used to allow for real-time processing, which will automatically update the database. All payroll functions will be processed at the headquarters store in Manchester. Accounts payable and accounts receivable transactions will also be prepared at the headquarters. Each store will have its own personal computer, which will serve as a client/server and workstation, along with two hand-held scanners connected to the personal computer and the workstation. The hand-held scanners will be used to scan in inventory at each store, and all purchase and return transactions will be scanned into the computer. ABC Software Solution recommends setting up a web page on the Internet, so that prospective customers can place orders 24 hours a day, 7 days a week. The World Wide Web (WWW) will also give Midnight Auto Supply the capability to contact its suppliers via the Internet to place orders. ABC Software Solution will install in each store one NEC 9624 Pentium Plus as its client server, with a 200 MHz processor, a 15-inch Multisync XV15+ monitor, a 4.0 gigabyte hard drive, an NEC standard keyboard with hot keys to access the database immediately, a Hewlett Packard Deskjet 820Cse printer and one workstation with hot keys. 
Our company chose the Access database software to build the database for all support (see Overview) because of its user friendliness, stability, and ability to interface with most software. Access is extremely flexible, and our analysts can build a customized database to handle your company's daily management requirements.
SYSTEM CAPABILITIES {Each location will have the following}
MASS STORAGE DEVICES
HARD DRIVE:
· NAME: NEC READY 9625
· INTEL PENTIUM PRO PROCESSOR (200 MHz)
· 32 MB OF RAM, EXPANDABLE TO 128 MB
· 4.0 GIGABYTE HARD DRIVE
· 12X MULTISPIN CD-ROM READER
· 56.6/14.4 KBPS VOICE/DATA/FAX MODEM (BOCA)
· MPEG FULL MOTION DIGITAL VIDEO
· 512 KB PIPELINE BURST CACHE
· 2 ISA, 2 PCI, 1 PCI/ISA EXPANSION SLOTS
MONITOR
· NEC 15" MULTISYNC XV15+
· 640 X 480: 256 STANDARD, 64K STANDARD, 16.8M WITH 2MB
· 800 X 600: 256 STANDARD, 64K STANDARD, 16.8M WITH 2MB
· 1024 X 768: 256 STANDARD, 64K STANDARD, 16.8M WITH 2MB
KEYBOARD
· NEC WINDOWS 95 104-KEY ENHANCED KEYBOARD WITH FUNCTION KEYS
PRINTER
· HEWLETT PACKARD DESKJET 820Cse PROFESSIONAL SERIES PRINTER
· PRINTS 6.5 PPM BLACK, 4 PPM COLOR
· PRINTS ALL TYPES OF FORMS, LABELS, CHECKS AND LETTERHEAD
SOFTWARE
· MICROSOFT WINDOWS 97
· ACCESS (for building the customized database)
· NETSCAPE (Internet software)
SYSTEM:
· WILL DETERMINE LOCATION OF ALL PARTS FOR ALL THREE STORES
· WILL MAINTAIN ALL PAYROLL FUNCTIONS
· WILL MAINTAIN ALL ACCOUNTS PAYABLE DATA
· WILL MAINTAIN ALL ACCOUNTS RECEIVABLE DATA
· WILL MAINTAIN A LIST OF ALL MANUFACTURERS BY NAME, PART CLASSIFICATION AND PART NUMBER
· WILL KEEP A CURRENT STATUS OF PURCHASE ORDERS
· WILL KEEP A CURRENT STATUS OF ALL INVENTORY BY PART NUMBER
Company Information: ABC Software Solution will send a consultant/analyst to your headquarters store to observe and interview the employees at Midnight Auto Supply. The data obtained from the observation/interview will allow us to design the best product for your company. 
After the observation/interview, the consultant/analyst will convene back at ABC headquarters and participate in a round-table meeting to discuss the outcome and the design phase. The critical information is listed in the Requirement Document (RD). Our staff will assist the employees at Midnight Auto Supply in writing a full and complete Requirement Document.
THE PHASE PLAN (SEE GANTT CHART FOR BEGINNING AND ENDING DATES OF EACH PHASE)
DEFINITION: With the signature of Mr. Valve or the company's designated representative, ABC Software Solution will help write the RD (see Solution). An analyst will assist your staff in writing the RD to ensure that all details have been covered.
ANALYSIS: During this phase, a Functional Specification (FS) will be established, consisting of a cost analysis, a milestone schedule and milestone deliverables, all with a fully written, detailed description. Our analyst will assist your staff in writing the FS. A signature on the FS will be required prior to the start of the Design phase. The prototype will begin at this phase.
DESIGN: A test database will be designed, which will include menus and screens. A demonstration and sample data will be provided to the Midnight Auto Supply staff in order to ensure that all specifications have been met. After your company's staff have closely analyzed and reviewed the demonstration and sample data, a signature of approval will be required from Mr. Valve or the company's designated representative. This phase will be frozen once the signature has been given, and any changes after this stage can result in a substantial increase in the firm fixed price and may change the scheduled completion date.
Acceptance Plan Test (APT): The APT will require a signature from Mr. Valve or the company's designated representative, approving all of the above phases (Definition, Analysis, Design). Once all modules have been completed for integration and testing, there will be a final system review. 
If all phases are accepted, a signature from Mr. Valve or the company's designated representative will be required for approval.
PROGRAMMING: The programming modules will be designed as follows:
Mod A is for menus & screens
Mod B is for the payroll system
Mod C is for the accounting system
Mod D is for the purchase order system
Mod E is for the supply system
Mod F is for inventory
Mod G is for printing checks
Mod H will interface Mods A through G
SYSTEM TEST: The programmer, test engineers and the project manager will test the system for quality assurance. At this phase, all the integrated systems will be working together properly. If any unforeseen problems exist, any and all adjustments will be made.
ACCEPTANCE: All terms from the APT will be implemented. The employees and staff at Midnight Auto Supply will be given a demonstration of the complete system. If any unforeseen problems occur, additional programming changes will be applied at this time. A signature from Mr. Valve or the company's designated representative will be needed at this time.
DELIVERABLES
Computer: The first delivery will be at the headquarters store in Manchester. We will install the 15-inch NEC Multisync XV15+ monitor, the Hewlett Packard printer, the NEC standard keyboard and the hand-held scanner. The operating system (Windows 97), along with the prototype software, Netscape (web page) and all of the custom-written software (Access), will be installed during the operation and installation phase, starting in the second week of March 1998 and finishing during the first two weeks of April 1998.
Hardware: The Pentium Pro processor comes with 32 MB of RAM, expandable to 128 MB. The expandable RAM will be sufficient for the next 4 to 5 years. The A1 has a 4.0 gigabyte hard drive and includes a 56.6 data/fax modem from BOCA. At the current time the 4.0 gigabyte drive is the latest and largest on the market. The 56.6 data/fax modem is currently the fastest on the market; it will take your company well into the future. 
The 12X Multispin CD-ROM reader is also the fastest and the best in quality on the market today.
MONITOR: NEC 15-inch Multisync XV15+ with a .28mm dot pitch.
KEYBOARD: NEC standard with function keys.
PRINTER: Hewlett Packard Deskjet 820Cse Pro Series with 800 dpi.
SOFTWARE: The software will be written upon the signature of the Analysis phase (1st week of Aug 1998).
WARRANTIES: Each NEC 9624 is backed by a one-year limited warranty, which includes one year of on-site service and 90 days of software support.
Warranty Plus: An optional extended warranty includes three years of on-site service at a cost of $260.00 per PC (monitor, keyboard, PC server, scanner & printer); additional items will cost $25.00 for the scanner and $160.00 for the workstation. The total cost for the additional warranty is $1300.00.
Documentation and Training: A full set of documentation, along with the user manuals, will be provided during the operation and installation phase. The operation and installation is scheduled to start on March 1, 1998. During the training phase, our project manager will train each store for one week. In the last week we will have an analyst at each store to finalize everything.
OPERATION: Midnight Auto Supply's automated system (A1) will be installed at the headquarters store first. Once implemented, the St. Charles and Afton stores will be completed within fourteen working days. During the training, a full set of documentation and user manuals will be issued at each site. The project manager will begin a training session at each store. Each session will take five working days to complete and will cover how to properly start up the A1 system, back up the database, restore/recover data, and shut down. The training sessions will also cover use of the scanner devices.
PROPOSED SCOPE FOR MIDNIGHT AUTO SUPPLY
ABC Software Solution is writing this solution for Midnight Auto Supply to show the benefits of using our A1 automated system. 
The A1 system will provide the means for Midnight Auto Supply to track its daily activities and manage all of its inventory. This state-of-the-art system will make payroll much simpler and will automate all federal, state and 401k activities. The A1 system will automate the headquarters, Afton and St. Charles stores by handling all accounts receivable, accounts payable and purchase orders. The A1 system will also list all manufacturer prices and parts, and rank the manufacturers from the least to the most costly part. After reading the project information sheet, we at ABC Software Solution understand your needs and feel that the A1 system will be what your company needs.
STAFF
Project Manager: Wilbert E. Brownlow
Programmer: Personnel will be assigned upon signature. The levels are as follows:
SYSTEM FLOW CHART FOR MIDNIGHT AUTO SUPPLY
f:\12000 essays\technology & computers (295)\Propaganda in the Online Free Speech Campaign.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Propaganda in the Online Free Speech Campaign Propaganda and Mass Communication July 1, 1996 In February 1996, President Bill Clinton signed into law the Telecommunications Act of 1996, the first revision of our country's communications laws in 62 years. This historic event has been greeted with primarily positive responses by most people and companies. Most of the Telecommunications Act sets out to transform the television, telephone, and related industries by lowering regulatory barriers and creating law that corresponds to current and emerging technology. One part of the Telecommunications Act, however, is designed to create regulatory barriers within computer networks, and this part has not been greeted so favorably. It is called the Communications Decency Act (CDA), and it has been challenged in court from the moment it was passed into law. Many of the opponents of the CDA have taken their messages to the Internet in order to gain support for their cause, and a small number of these organizations claim this fight as their only cause. Some of these organizations are broad-based civil liberties groups, some fight for freedom of speech based on the First Amendment, and other groups favor the loosening of laws involving the use of encrypted data on computers. All of these groups, however, speak out for free speech on the Internet, and all of them have utilized the Internet to spread propaganda to further this common cause of online free speech and opposition to the CDA. 
Context in which the propaganda occurs
Five years ago, most people had never heard of the Internet, but today the Internet is a familiar term to most people, even if they are not exactly sure what the Internet is. Along with the concept of the Internet, it is widely known that pornography and other adult-related materials seem to be readily available there, and this troubles most people. Indeed, it does not take long for even a novice Internet user to find adult materials such as photographs, short movies, text-based stories and live discussions, chat rooms, sexual aid advertisements, sound files, and even live nude video. The sudden appearance of the widely accessible Internet, combined with the previously existing issues associated with adult materials, has caused a great debate around the world about what should be done. The major concern is that children will gain access to materials that should be reserved for adults. Additionally, there is concern that the Internet is being used for illegal activities such as child pornography. In response to these concerns, the government enacted the Communications Decency Act, which attempts to curtail these problems by defining what speech is unacceptable online and setting guidelines for fines and prosecution of people or businesses found guilty of breaking the law. While the goal of keeping children from gaining access to pornography is a noble one that few would challenge, the problem is that the CDA has opened a can of worms for the computer world. Proponents of the CDA claim that it is necessary because the Internet is so huge that the government is needed to help curb the interaction of adult materials and children.
Opponents of the CDA claim that the wording of the CDA is so vague that, for example, an online discussion of abortion would be illegal under the new law, and our First Amendment rights would therefore be pulled out from under us. Opponents also argue that Internet censorship should be done at home by parents, not by the government, and that things such as child pornography are illegal anyway, so there is no need to restate them in a new law. At this point, the battle lines have been drawn, and as with everything else in society, everyone is headed into the courtroom to fight it out. While this happens, the propagandists have set up shop on the Internet. In terms of a debate about the First Amendment and the restriction of free speech, this current battle is nothing new. The debate over free speech has been going on for as long as people have been around, and in America many great court cases have been fought over it. The Internet's new and adolescent status does not exempt it from such problems. Just as all other forms of mass communication have been tested in the realms of free speech and propaganda, so will the Internet be.
Identity of the propagandists
There are scores of online groups that work to promote free speech on the Internet, but a few stand out because of the scope of their activities, their large presence on the Internet, and their apparently large numbers of supporters. The Electronic Frontier Foundation (EFF) is today one of the most visible online players in the fight against the CDA; it was established as a non-profit organization in 1990, before the Internet started to gain its status as a daily part of our lives. Mitchell D. Kapor, founder of Lotus Development Corporation, along with his colleague John Perry Barlow, established the EFF to "address social and legal issues arising from the impact on society of the increasingly pervasive use of computers as a means of communication and information distribution."
In addition, the EFF also notes that it "will support litigation in the public interest to preserve, protect and extend First Amendment rights within the realm of computing and telecommunications technology." In the press release that announced the formation of the EFF, Kapor also said, "It is becoming increasingly obvious that the rate of technology advancement in communications is far outpacing the establishment of appropriate cultural, legal and political frameworks to handle the issues that are arising." Clearly, the EFF is very up-front and open about its belief that the American legal system is not currently equipped to handle society's daily reliance on computers, and that the EFF will assist in handling problems where computers and litigation meet. Initial funding of the EFF was provided in part by a private contribution from Steve Wozniak, the co-founder of Apple Computer, and since then contributions have come from industry giants such as AT&T, Microsoft, Netscape Communications, Apple Computer, IBM, Ziff-Davis Publishing, Sun Microsystems, and the Newspaper Association of America. It is likely that these companies see the need for assistance when the computer world collides with the world of law, and also see the EFF as one way for the rights of the computer industry and its customers to be upheld. A second player in the area of online free speech protection is the Center for Democracy and Technology (CDT). The CDT, founded in 1994, is less up-front about its history and funding, but it states that its mission is to "develop public policies that preserve and advance democratic values and constitutional civil liberties on the Internet and other interactive communications media." Like the EFF, the CDT is located in Washington, DC, and is a non-profit group funded by, according to its 1996 annual report, "individuals, foundations, and a broad cross section of the computer and communications industry."
A third major player in the online free speech movement is the Citizens Internet Empowerment Coalition (CIEC, pronounced "seek"). This is the group that filed the original lawsuit against the US Department of Justice and Attorney General Janet Reno to overturn the CDA based, in part, on its use of the word "indecent". The plaintiffs in this lawsuit are a very diverse group, and include many who are also cited as contributors to the EFF, among them the American Booksellers Association, the Freedom to Read Foundation, Apple Computer, Microsoft, America Online, the Society of Professional Journalists, and Wired magazine. In its appeal to gain new members, CIEC states that it is "a coalition of Internet users, businesses, non-profit organizations and civil liberties advocates formed to challenge the constitutionality of the Communications Decency Act because they believe it violates their free speech rights and condemns the Internet to a future of burdensome censorship and government intrusion." Like the CDT, CIEC does not directly state which organizations support its cause or how much money is changing hands, but based on the companies supporting the lawsuit CIEC filed, it is almost certain that the same computer and publishing companies are paying for CIEC's existence. Finally, unlike the other groups, which are activists for several causes, CIEC has the sole mission of challenging the CDA and does not claim any other purpose.
Ideology and purpose behind the campaign
There are several interrelated reasons motivating the online free speech movement. The most visible, and therefore one of the most obvious, is to sign up new supporters. The current technology of the Internet is ideal for gathering information from people without inconveniencing them.
While exploring the Internet in the privacy of one's own home, it takes only seconds to type in your name, address, and other information so that it can be sent to the headquarters of an organization. Compared with the traditional process of walking into a storefront, talking with a human, and then writing out your membership information on paper, this new electronic method is superior. A person can become an online free speech supporter at 2 a.m. while sitting at home in his or her underwear, eating leftovers, without having to worry about talking to a pushy recruiter. Because gathering information is so easy, it is possible for an organization to quickly recruit large numbers of members. Also, in terms of demographics, the mere fact that members are signing up online generates a certain, desirable demographic group: even though computers are becoming easier to use every day, the majority of Internet users are educated and tend to have higher incomes than average. At the head of CIEC's page where new members are encouraged to sign up, there is a large banner proclaiming, "Over 47,000 Individual Internet Users Have Joined as of June 17, 1996!" This technique of announcing the number of new recruits is popular among online organizations that recruit members because it lets users know that they are not alone. The user will see the large number, know that he or she will be part of a large group of supporters, and therefore feel safe about signing up with the cause. Once an individual gets "in the door" of an online free speech website, he or she is encouraged to become a member or supporter, but why are the supporters needed? I believe that when presented in a legal setting, these large membership lists can be used to demonstrate that numerous people do exist who are in favor of the online free speech campaign.
Just as people vote for laws or politicians, membership lists demonstrate that people have "voted" for this cause. While a membership list is not quite as powerful as an election, it does show that real "everyday" people support the cause. When the online free speech campaign takes the CDA case to the Supreme Court, it will be armed with long lists of people who support what these organizations are trying to do, and the knowledge of all those supporters could be just enough to tilt the judges' decision in the right direction. Another purpose behind the online free speech campaigns is to attract more businesses to the effort. When, for example, a software company that advertises on the Net proclaims itself a supporter of the movement, the movement gets free advertising. When the names of computer companies such as Microsoft and Apple are mentioned in the introductory and sign-up information, other companies might feel the urge to join because of the "me too" effect, in which smaller companies look up to the bigger companies and tend to adopt the policies of the giants. For example, if YYZ Software knows that Microsoft is supporting the online free speech movement, YYZ might feel important if it supports the cause too. While the number of company owners or managers browsing a site will be much smaller than the number of individual people looking at the same site, this idea of throwing around the names of famous companies is an attempt to attract at least some supporters. Even though only a small number of supporters could be gained through this channel, it is still a channel, and therefore important no matter how small. Also, if this method happens to bring a large company into the group, the organization could gain great financial support.
While it is likely that all the Netscapes and IBMs of the world are already aware of the online free speech movement, new companies and new fortunes are made frequently in the fast-moving world of the computer industry, so an unknown company today could be a key player tomorrow. It is therefore important for the online free speech movement to be constantly recruiting new companies, because the need for large financial backers never ends, and you never know when a mom-and-pop operation today will be the next Microsoft. Another motivation behind the campaign is the protection of businesses and their interests. For example, a new online magazine for scientists in the biomedical field is being formed, and the company behind the venture, Current Science, is investing between $7.5 and $9 million in the project (Rothstein). With money like this at risk, it is obvious that freedom of speech must be secured in order for such ventures to work. Finally, the ultimate goal for all these groups is the repeal of the CDA, but the deletion of the CDA would not mean the end of free speech problems on the Internet, so these groups will always exist in some form or another. Just as there is an ongoing debate about which books are appropriate for whom, there will always be a debate about which Internet content is appropriate for whom. Add to this the global reach of the Internet, and the scope and complexity of the issue can be envisioned.
Target audience
The clever, or perhaps just convenient, aspect of online free speech propaganda is that the propaganda is located in the very place the debate is about. In other words, if you want to promote free speech, go to where the speech is taking place: the Internet. By promoting propaganda online about online free speech, you are directly targeting the audience you want to reach.
People who do not utilize the Internet will be less interested than those who do, so it makes sense to locate your campaign on the Internet, where people will naturally be more concerned about computer censorship issues. An added bonus of the Internet is its relatively low cost compared to traditional media outlets such as print or radio, so not only are these groups promoting their causes almost directly to the people they want to reach, they are doing it at a very low cost compared with more traditional methods. On the other hand, these online free speech organizations have little, if any, propaganda outside of the Internet, so they are not reaching the maximum number of possible people. While they all maintain traditional offices, phone numbers, postal mailing addresses, and fax numbers, they are virtually unknown to the populace outside of the Internet. While purchasing print or television advertisements might not be as direct and monetarily efficient as utilizing the Internet, those traditional methods would help get the word out to the largest number of people. Just as all other forms of mass media have been utilized for the spread of propaganda, so will the Internet be.
Media utilization techniques
This section is by far the most interesting because it deals primarily with the actual examples and techniques of propaganda used by the online free speech movement. While the propaganda of these groups is primarily limited to the electronic realm of the Internet, it is important to remember that the Internet is itself a multimedia tool. Unlike a newspaper, for example, the Internet can convey words, pictures, sound, and moving video. As an added dimension, these forms can vary in unlimited colors, intensities, qualities, and quantities, so that the viewer does not always know what to expect. The important propagandistic idea of utilizing all available channels to maximize the effect of propaganda is certainly at work here.
My first involvement with the online free speech movement, and the reason why I decided to investigate this topic, was the Blue Ribbon Campaign. Almost a year ago, I began to notice the same blue ribbon icon on many different Internet web locations and homepages. These icons are similar to the red AIDS awareness ribbon in appearance and function, and the actual size of the icon in most locations is typically only about 8 mm high by 25 mm wide. Of course this size depends on several computer-specific variables, but the point is that the Blue Ribbon Campaign icon is small so that it appears quickly without taking much transfer time. The people behind the Blue Ribbon icon knew that if they created a large, space- and time-hogging image, people would become frustrated with the lethargic image and fail to gain respect for it. Instead, the icon is tiny and unobtrusive, so its appearance on a web page is not bothersome. The idea of using a blue ribbon is smart because of the association with the AIDS red ribbon campaign. While people have different opinions about homosexuality, most people, if not all, agree that AIDS must be stopped. Using this logic, it makes sense to harness the almost universal appeal of the red ribbon by creating a blue one. Additionally, the red ribbon icon is very well established and widely recognized, so once again, the adoption of a similar blue ribbon icon is smart. The genius of the Internet's world wide web is the use of hyperlinks, or hypertext. Hypertext is the system that allows the reader to click on something and be instantly transported to another location that relates to what he or she clicked on. Every Blue Ribbon Campaign icon on the world wide web contains the Internet homepage address of the Electronic Frontier Foundation, one of the key players in the online free speech movement.
Therefore, by clicking on the Blue Ribbon icon, the reader is instantly transferred to the EFF's homepage. Compared again with the AIDS red ribbon movement, the advantages of the Internet system are obvious. When one sees a person wearing an AIDS red ribbon, he or she cannot automatically and instantaneously receive information about AIDS; the person would have to ask the red ribbon wearer for a phone number or address where AIDS information could be found. With the Blue Ribbon Campaign, however, the information is instant, and it fits right in with today's fast-moving society. A person can see the Blue Ribbon icon and immediately see what it means. There is no time for the person to lose interest while making a phone call or waiting for a postal letter to be delivered. Thus, on a daily basis I was seeing the Blue Ribbon Campaign icons, and several times I clicked on those icons in order to gain more information about this symbol that kept popping up all over the place. If, on a particular day, I was not in the mood to learn about the EFF, I could easily go back to what I was doing before I clicked on the blue ribbon icon. However, since the icon kept appearing at various web sites, there were times when I did feel like exploring this interesting phenomenon further, and because the blue ribbon icon was easy to run across, it was easy for me to enter the EFF's site and see what it had to offer. The EFF's homepages do contain a brief history of the organization, but there is no information about the actual origin of the Blue Ribbon Campaign. According to electronic mail I received from Dennis Derryberry at the EFF after querying about the origin of the Blue Ribbon Campaign: The Blue Ribbon Campaign does not belong to any specific group; it is shared by all groups and individuals who value and support free speech online.
I believe the idea originally was sparked by a woman who has been helping us with membership functions, but amid all the expansion of the campaign, we kind of forgot where it really came from. I guess that's just the spirit of a campaign for the benefit of the many. (Derryberry) Even if the Blue Ribbon Campaign does not belong to any one group, it was originated by the EFF, and all of the blue ribbon icons point back to the EFF. One of the first options presented when one sees the EFF's opening page is to join the EFF, the Blue Ribbon Campaign, or both. Joining the Blue Ribbon Campaign is simple, and basically involves just giving a small amount of personal information and then copying one of several blue ribbon icons to be used on your web site. There are many different blue ribbons available, of all different sizes and compositions, but they all revolve around the basic blue ribbon idea. If a user is not fully pleased with the online selection of available icons, there is an option to receive information about many others. Finally, it is also possible to create your own blue ribbon icon and allow the EFF to give it away for the same cause. This entire emphasis on the graphic image of the campaign is a smart move because people's interest is aroused by images more than by words. If the words "Blue Ribbon Campaign" were seen everywhere, the impact would be less dramatic than that of the colored image of the blue ribbon that accompanies these words. Even though the doorway to the EFF is graphic-based, the bulk of the EFF's web site contains document after document of textual information relating to the CDA and freedom of speech. Also located here is the entire text of the Telecommunications Act of 1996, including all text of the CDA. Internet users who click on the blue ribbon icon are taken directly to the part of the EFF's website that deals with the Blue Ribbon Campaign.
Because the Blue Ribbon Campaign is not the only cause the EFF supports, there is of course much more to the EFF's website than just this. Some of the sections of the EFF's homepage are:
The Blue Ribbon Campaign section is set apart from the other areas by use of the traditional blue ribbon icon. This section begins with a link to the newest information about the CDA, and then lists links to several things, including introductory information about the campaign; federal, state, and local information; an archive of past information; examples of Internet sites that could be banned under the CDA; activism information; and finally a "Skeptical?" link to a page that tries to convince skeptics of the EFF's cause.
About EFF is the first thing that new visitors to the site will want to read. It contains a brief history of the organization and answers most of the questions people might have. This area also goes into the beliefs and motivations behind the EFF.
Action Alerts is a list of current events that the EFF is monitoring. For example, one of the most recent action alerts deals with the latest decision on the CDA. This section also encourages people to take action in the Blue Ribbon Campaign and provides a list of ways to help. At the top of the list there is a disclaimer about civil disobedience being "at least nominally illegal". Some of the suggested activities include supporting a 28th amendment to the U.S. Constitution to extend First Amendment rights to the Internet, attending rallies, wearing T-shirts that promote free speech online, and putting a real blue ribbon pin on your backpack if you are a student. This section also contains a list of previous examples of protest and demonstration against the CDA, to show that people have actually gone out to stand up for the things promoted on this site.
Guide to the Internet is a document that helps acquaint novices with the Internet in general, and does not contain any EFF- or free-speech-specific material. While this seems innocent enough, its purpose here is a bit deeper: if more people become familiar with the Internet, more people will use it and therefore, hopefully, become interested in online free speech.
Archive index is an essential tool on the EFF website because of the large number of documents available there. It is a searchable index that aids users in finding specific information contained in the EFF pages. For example, if you wanted to see whether the word "pornography" occurred in the CDA, you could search for it.
Newsletter contains the current and past newsletters of the EFF, which are updates about things the EFF is currently involved with. Although much of the information in these newsletters is redundant, in that it can be found elsewhere on the site, I think there are two reasons for this. First, the newsletter format is one everyone is familiar with; a person new to the EFF site who sees the "newsletter" section will automatically have a general idea of how the information will be presented, which makes it easier and more welcoming to read than other types of information. Second, the newsletter is important because it repeats information. One key aspect of propaganda is repetition, and the duplication of certain information in the newsletter accomplishes that.
Calendar is a listing of future events and dates that are important to the EFF. Many of the listings are protest rallies and scheduled speeches that look good when many people attend. This provides a consolidated, easy-to-access listing of dates, without one having to search all over the site.
Also, the information here is available for download so that it can be put into a person's personal time-management software on his or her own computer. This gives the EFF an indirect link to remind you where to go and when.
Job openings provides information about applying for a job with the EFF.
Merchandise lets members and nonmembers purchase T-shirts and metal Blue Ribbon Campaign pins to help spread the word.
Awards lists the 19 awards won by the EFF for various things such as "Best of the Web" and "Top 250 Lycos Sites". The display of these awards legitimizes the organization and shows others that many people are visiting the site.
Staff Homepages at first seems somewhat boring, but this section is actually a list of the staff, in rank order, with a short description of what each person does at the EFF. Clicking on a person's name takes you to his or her homepage. This display of information once again reinforces the white propaganda the EFF uses.
Miscellaneous contains a sponsors list, other publications of interest, and EFF-related images, sounds, and animations.
A second example of online free speech propaganda on the Internet is a homepage promoting the lawsuit filed by the Citizens Internet Empowerment Coalition (CIEC, "seek") against the U.S. Department of Justice and Attorney General Janet Reno. This page is designed to look like a 1700s handbill or poster and to arouse emotions of patriotism and fighting for one's country. It would be difficult for an American to view this document and not be reminded of how we fought for our freedom from the English. Icons of patriots shouting out loud, cannons and American flags, and pictorial representations of the Constitution all arouse emotions of fighting for what is right. This page also contains a four-minute audio clip available for download: Judith Krug of the American Library Association speaking about the censorship of libraries.
The reader has only to click on the icon, and the audio is transferred to his or her computer; the user listens to the audio as it is transmitted. Aside from these audio and visual messages, this site is similar to the EFF's in that it contains a great deal of information and links to related anti-CDA sites. Another website that utilizes propaganda is operated by the Center for Democracy and Technology (CDT). This site is one of many that use an animated "Free Speech" icon displaying fireworks exploding in the air. Like the other examples, this too is very patriotic. Also like the other sites, the CDT displays various Internet awards it has won, as well as the number of people it has signed up who support the lawsuit against the CDA.
Counter propaganda
While there are groups and people who favor the CDA, there is very little propaganda promoting these beliefs. Part of the reason is that the whole debate over the CDA seems to be a nonpartisan issue in terms of Republicans and Democrats; if it had been a partisan issue, there would certainly be propaganda on both sides. The main reason so little counter-propaganda exists is that the CDA is the law, so people who are for it have already been appeased to a certain extent. The anti-CDA groups are protesting and using propaganda because the CDA is the law and they want it changed. As with many things in life, it is more common to hear complaints from people who are not satisfied than from people who are pleased.
f:\12000 essays\technology & computers (295)\Protecting A Computer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The word "computer" began to appear in dictionaries about two hundred years ago, and at one time some people did not even know what a computer was. Today, however, most people not only know what a computer is but understand how to use one. Computers have therefore become more and more popular and important to our society.
We can use computers everywhere, and they are very useful and helpful in our lives. The speed and accuracy of computers make people feel confident in relying on them, and as a result a great deal of important information and data is saved on computers: your diary, the financial records of an oil company, or secret intelligence belonging to a military department. A lot of important information can be found in the memory of a computer. So people may ask a question: can we make sure that the information in the computer is safe and that nobody can steal it from the computer's memory? Physical hazards are one cause of destroyed data. Spilling a flood of coffee onto a personal computer, for example, can endanger its hard disk. Beyond that, the human caretakers of a computer system can cause as much harm as any physical hazard; a cashier in a bank, for example, can transfer money from a customer's account into his own. Nonetheless, the most dangerous thieves are not those who work with computers every day, but the youthful amateurs who experiment at night --- the hackers. The term "hacker" may have originated at M.I.T. as students' jargon for classmates who labored nights in the computer lab. In the beginning, hackers were not dangerous at all; they merely stole computer time from the university. In the early 1980s, however, hackers became a group of criminals who steal information from other people's computers. To guard against hackers and other criminals, people need to set up a good security system to protect the data in their computers. The most important thing is to keep hackers and criminals from entering our computers at all, which means we need to design a lock to lock up our data, or use identification to verify the identity of anyone seeking access. The most common method of locking up data is a password system.
Passwords are a multi-user computer system's usual first line of defense against hackers. We can use a combination of alphabetic and numeric characters to form a password, and the longer the password, the more possibilities a hacker's password-guessing program must work through. However, it is difficult to remember a very long password, so people tend to write it down, which immediately makes it a security risk. Furthermore, a high-speed password-guessing program can often find a password easily. A password system alone, therefore, is not enough to protect a computer's data and memory. Beyond passwords, a computer company must also consider the security of its information center. In the past, people used locks and keys to limit access to secure areas, but keys can be stolen or copied easily, so card-keys were designed to prevent this. Three types of card-keys are commonly used by banks, computer centers, and government departments. Each of these card-keys can carry an identifying number or password encoded in the card itself, and all are produced by techniques beyond the reach of the average computer criminal. The first is called the watermark magnetic card; it was inspired by the watermarks on paper currency. Its magnetic strip holds a 12-digit number code that cannot be copied, and it can store about two thousand bits. The other two cards, optical memory cards (OMCs) and smart cards, can store thousands of times more data in the strip, and both are widely used in computer security systems. Even so, password systems and card-keys together are not enough to protect the memory in a computer. A computer system also needs a restricting program to verify the identity of its users.
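The point above about password length can be made concrete. Assuming a password drawn from the 62 alphanumeric characters (26 lowercase letters, 26 uppercase letters, 10 digits) and a purely hypothetical guessing speed, a quick sketch of the search space a guessing program faces:

```python
# Size of the search space for passwords built from letters and digits.
ALPHABET = 26 + 26 + 10  # lowercase + uppercase + digits = 62 characters

def search_space(length):
    """Number of distinct passwords of exactly `length` characters."""
    return ALPHABET ** length

# Each extra character multiplies the attacker's work by 62.
print(search_space(4))   # 14776336
print(search_space(8))   # 218340105584896

# At a hypothetical one million guesses per second, exhausting every
# 8-character password would take on the order of several years.
seconds = search_space(8) / 1_000_000
print(round(seconds / (365 * 24 * 3600), 1))  # 6.9 (years)
```

This is why a longer password defeats a guessing program while a short one does not, and also why people are tempted to write long passwords down.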
Generally, identity can be established by something a person knows, such as a password, or by something a person has, such as a card-key. But people forget their passwords and lose their keys, so a third method is needed: something a person is, a physical trait of the human being. A new technology called the biometric device can identify the person who wants to use a computer. Biometric devices are instruments that perform mathematical analyses of biological characteristics; a voice, a fingerprint, or the geometry of the hand can all serve for identification. Many computer centers, bank vaults, military installations, and other sensitive areas are now considering biometric security systems, because their rates of mistaken acceptance of outsiders and of rejection of authorized insiders are extremely low. The individuality of the vocal signature is the basis of one kind of biometric security system: voice verification. The voice verifier described here is a developmental system at American Telephone and Telegraph. All a user must do is repeat a particular phrase several times. The computer samples, digitizes, and stores each reading, then builds a voice signature that makes allowances for an individual's characteristic variations. The theory of voice verification is simple: it relies on the characteristics of a voice, its acoustic strength. To isolate personal characteristics within these fluctuations, the computer breaks the sound into its component frequencies and analyzes how they are distributed. Anyone who wanted to steal information from your computer would need a voice identical to yours, which is practically impossible. Besides voices, fingerprints can be used to verify a person's identity, because no two fingerprints are exactly alike. 
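The frequency analysis described above, breaking a sound into its component frequencies and examining how they are distributed, can be illustrated with a toy signal. This is only a sketch of the underlying idea, not AT&T's verifier: the synthetic two-tone "voice", the sample rate, and the use of a plain Fourier transform are all assumptions made for the example.

```python
# Sketch of the idea behind voice analysis: decompose a signal into
# component frequencies and look at how the energy is distributed.
# The signal here is synthetic; a real verifier is far more involved.
import numpy as np

SAMPLE_RATE = 8000                       # samples per second (assumed)
t = np.arange(0, 1.0, 1 / SAMPLE_RATE)   # one second of "speech"

# Pretend voice: a strong 200 Hz component plus a weaker 450 Hz one.
signal = 1.0 * np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 450 * t)

# Break the sound into its component frequencies.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / SAMPLE_RATE)

# The distribution of energy across frequencies is the raw material
# a verifier would compare against the stored voice signature.
peak = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {peak:.0f} Hz")
```

A real system would compare this distribution, taken over many phrase repetitions, against the enrolled signature rather than just finding a single peak.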
In a fingerprint verification system, the user places one finger on a glass plate; light flashes inside the machine, reflects off the fingerprint, and is picked up by an optical scanner, which transmits the information to the computer for analysis. Security experts can then verify the person's identity from that information. The last biometric security system is based on the geometry of the hand. Here the computer uses a sophisticated scanning device to record the measurements of each person's hand: with an overhead light shining down, a sensor underneath the plate scans the fingers through glass slots, recording light intensity from the fingertips to the webbing where the fingers join the palm. Only after passing the computer's inspection can a person use the machine or retrieve data from it. Yet however many security systems are invented, they are useless if people go on believing that stealing information is not a serious crime. We need to pay more attention to computer crime and fight the hackers themselves, not merely surround the computer with security systems. Why do we need to protect our computers at all? In the early days of computing that question was often asked; today everyone understands the importance and usefulness of a computer security system. Computers have become ever more central to our lives, and a small memory chip in a personal computer can hold an enormous amount of information. A computer system's hard disk is like a bank: it contains a great deal of costly material, whether a personal diary, the financial records of a trading company, or secret military information. Installing a computer security system is like hiring guards to protect that bank: it can prevent the leak of information from the national defense industry just as it can protect the personal diary on your home computer. 
Nevertheless, there is a price to pay for the tools of security: equipment ranging from locks on doors to computerized gatekeepers that stand watch against hackers, and special software that prevents employees from stealing the company's data. The bill can range from hundreds of dollars to many millions, depending on the degree of assurance sought. Expensive as a computer security system is, it is worth building, because the data in a computer can be erased or destroyed by many kinds of hazard. A power supply problem or a fire, for example, can destroy all the data in a computer company. In 1987, in a computer center inside the Pentagon, the US military's sprawling headquarters near Washington, DC, a 300-watt light bulb was left burning inside a vault where computer tapes were stored. In time the bulb generated so much heat that the ceiling began to smolder, and when the door was opened, the air rushing into the room brought the fire to life. Before the flames could be extinguished, they had spread to consume three computer systems worth a total of $6.3 million. Besides such accidental hazards, people themselves are a major cause of data leaking out of computers. Two kinds of people can get inside a security system and steal its data. The first are the trusted employees who are meant to be inside the system, such as programmers, operators, and managers. The second are the youthful amateurs who experiment at night: the hackers. Consider the trusted workers first. They are the group that can most easily turn criminal, directly or indirectly. They may steal information from the system and sell it for a handsome profit, or they may be bribed by someone who wants the data, since it can cost a criminal far less in time and money to bribe a disloyal employee than to crack the security system. 
Besides the disloyal workers, hackers are also very dangerous. The term "hacker" originated at M.I.T. as students' jargon for classmates who labored nights in the computer lab. In the beginning hackers were not so dangerous; they merely stole hints for university tests. By the early 1980s, however, hackers had become a class of criminals who steal information from commercial companies and government departments. What, then, can we use to protect the computer? We have discussed the reasons for using a computer security system; now consider the tools. The most common is the password system. Passwords are a multi-user computer system's usual first line of defense against intrusion. A password may be any combination of alphabetic and numeric characters, up to a maximum length set by the particular system; most systems can accommodate passwords of up to 40 characters. But a long password is easily forgotten, so people write it down, which immediately creates a security risk. Others use their first name or some other significant word; armed with a dictionary of 2,000 common names, an experienced hacker can crack such a password within ten minutes. Besides the password system, card-keys are also in common use. Each kind of card-key can carry an identifying number or password encoded in the card itself, and all are produced by techniques beyond the reach of the average computer criminal. Three types are usual: the magnetic watermark card, the optical memory card, and the smart card. Still, both of these tools can fall into the wrong hands: passwords are forgotten by their users, and card-keys can be copied or stolen. We therefore need a higher level of computer security. The biometric device offers safer protection, reducing the probability of mistaken acceptance of an outsider to an extremely low level. 
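The dictionary attack mentioned above, in which the hacker simply walks a list of common names, is easy to sketch. Everything here is made up for illustration: the tiny name list stands in for the 2,000-name dictionary, and check_password stands in for whatever login interface the attacker is probing.

```python
# Toy illustration of why a first name makes a weak password: the
# attacker only needs to walk a short dictionary of likely words.
# The name list and the target password are invented for the example.

COMMON_NAMES = ["alice", "bob", "carol", "david", "eve", "mallory"]

def dictionary_attack(check_password, dictionary):
    """Try each dictionary word in turn; return the match, or None."""
    for guess in dictionary:
        if check_password(guess):
            return guess
    return None

# The "system" under attack, whose password happens to be a common name.
target = "eve"
found = dictionary_attack(lambda guess: guess == target, COMMON_NAMES)
print(f"cracked after walking the list: {found!r}")
```

At even a few guesses per second, a 2,000-word list is exhausted in minutes, which is consistent with the essay's ten-minute figure.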
Biometric devices are instruments that perform mathematical analyses of biological characteristics. The time required to pass such a system should not be too long, and the system should not inconvenience the user; one that required people to remove their shoes and socks for footprint verification, for example, would not be acceptable. The individuality of the vocal signature is one kind of biometric security. Voice verification systems are still at the experimental stage, but reliable ones would be useful for both on-site and remote user identification. The verifier described here is a developmental system at American Telephone and Telegraph. Enrollment requires the user to repeat a particular phrase several times; the computer samples, digitizes, and stores each reading of the phrase and then, from the data, builds a voice signature that makes allowances for an individual's characteristic variations. Another biometric device measures the act of writing. It consists of a biometric pen and a sensor pad: the pen converts a signature into a set of three electrical signals through one pressure sensor and two acceleration sensors. The pressure sensor registers changes in the writer's downward pressure on the pen point, while the two acceleration sensors measure the vertical and horizontal movement of the pen. A third device scans the pattern inside the eye. It uses an infrared beam that sweeps the retina in a circular path, and a detector in the eyepiece measures the intensity of the light reflected from different points along that path. Because blood vessels do not absorb and reflect the same quantities of infrared as the surrounding tissue, the eyepiece sensor records the vessels as an intricate dark pattern against a lighter background. The device samples light intensity at 320 points around the path of the scan, producing a digital profile of the vessel pattern. 
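The 320-point digital profile described above has to be compared against the profile stored at enrollment, with some tolerance for noise between scans. The sketch below shows one plausible matching step; the synthetic profiles, the mean-absolute-difference measure, and the 0.05 threshold are all illustrative assumptions, not details of the actual device.

```python
# Sketch of the matching step for a retina scanner: an enrollment
# profile of 320 light-intensity samples is compared to a fresh scan,
# and the user passes only if the two are close enough. The distance
# measure and threshold are assumptions made for illustration.
import math

PROFILE_POINTS = 320
THRESHOLD = 0.05   # maximum allowed mean difference (assumed)

def mean_difference(enrolled, scanned):
    """Average absolute difference between two intensity profiles."""
    return sum(abs(a - b) for a, b in zip(enrolled, scanned)) / len(enrolled)

def verify(enrolled, scanned):
    """Accept the scan only if it is close to the enrolled profile."""
    return mean_difference(enrolled, scanned) <= THRESHOLD

# Synthetic profiles: the same eye with slight noise vs a different eye.
enrolled = [math.sin(i / 10) for i in range(PROFILE_POINTS)]
same_eye = [v + 0.01 for v in enrolled]
other_eye = [math.cos(i / 10) for i in range(PROFILE_POINTS)]

print(verify(enrolled, same_eye))    # the same person passes
print(verify(enrolled, other_eye))   # an impostor is rejected
```

The threshold is the tuning knob behind the "mistaken acceptance" and "rejection of authorized insiders" rates the essay mentions: loosening it admits more impostors, tightening it rejects more legitimate users.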
Enrollment can take as little as 30 seconds, and verification can be even faster, so legitimate users pass the system quickly while intruders are rejected accurately. The last device we will discuss maps the intricacies of a fingerprint. In this verification system the user places one finger on a glass plate; light flashes inside the machine, reflects off the fingerprint, and is picked up by an optical scanner, which transmits the information to the computer for analysis. Yet although scientists have invented many kinds of computer security systems, no combination of technologies promises unbreakable security. Experts in the field agree that someone with sufficient resources can crack almost any computer defense. In the end, the most important thing is the conduct of people themselves: if everyone behaved well, no complicated security system would be needed to protect the computer.

f:\12000 essays\technology & computers (295)\Quality Issues in Systems Development.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The period between the 1970s and the 1980s was a time of great advancement in computer hardware technology. It took an industry still in its infancy to a level of great sophistication, and ultimately revolutionized the information storage and processing of every other industry, and indeed of the entire world. It was also during this period, however, that the shortcomings of implementing such technology became apparent. A significant number of development projects failed, with disastrous consequences that were not only economic but social as well. Although hardware technology was readily available and ever improving, what held the industry back were its methods of implementing large systems. Consequently, all kinds of limited approaches materialized that avoided the costs and risks inherent in big-systems development. 
Times have changed, and with them our understanding of how best to develop large systems. Today's large systems yield greater benefits for less cost than those of previous decades: they provide better, more timely information, the ability to integrate and correlate internal and external information, and the ability to support streamlined business processes. Unfortunately, not every system that information workers develop is well implemented, which means that a computer system originally intended to make a company more efficient, productive, and cost-effective can end up doing the exact opposite: wasting time, money, and valuable manpower. Even with all the lessons learned from the 70s and 80s, the vastly superior methodologies and knowledge of the 90s are still proving fallible, as the following examples suggest.

System Development Failures

In Britain in 1993, the London Ambulance Service was forced to abandon its emergency system after it performed disastrously on delivery, causing delays in answering calls. An independent inquiry ordered by British government agencies found that the ambulance service had accepted a suspiciously low bid from a small and inexperienced supplier. The inquiry report, released in February 1993, determined that the system was far too small to cope with the data load. For an emergency service, such a failure costs not only money but, more critically, the ability to dispatch ambulances correctly and promptly in critical situations; the implications, both social and economic, are obvious. Since the failure, the ambulance service has reverted to a paper-based system that will remain in place for the foreseeable future. Another failure was the collapse of the Taurus trading system of the London Stock Exchange. 
Taurus would have replaced the shuffling of six sorts of paper among three places over two weeks, which is how transactions in shares are settled in London, with a computerized system able to settle trades in three days. The five-year Taurus development effort, which sources estimated cost hundreds of millions of dollars, was termed a disaster, and the project was abandoned in March 1993. Exchange officials have acknowledged that the failure put the future of the Exchange itself in danger. Why did these systems fail? What went wrong? In the case of the London Stock Exchange, the real failure was managerial, both at the exchange and among member firms. The exchange's bosses gave the project managers too much rope, allowing them to fiddle with specifications and bring in too many outside consultants and computer firms. Its new board, for all its heavyweight and diverse membership, proved too remote from the project. Member firms that spent years griping about Taurus's cost and delays never communicated their doubts about the project itself. The Bank of England, a strong Taurus supporter, failed to ask enough questions, despite having had to rescue the exchange's earlier attempt to computerize settlement of the gilts market. According to Meredith, an expert in project management issues, many system development catastrophes begin with the selection of a low bidder, even though most procurement rules state that cost should be only one of several criteria of selection. Software failures occur because the companies involved did no risk assessment before starting the project. Moreover, many companies do not study the problems experienced in earlier software development projects, so they cannot apply that experience when implementing new ones. Another source of problems is the failure to measure the quality of output during the development process. 
Information workers have not yet fully understood the relationship between information and development. Information should be viewed as one of the essential know-how resources; its value and necessity for development can be argued, and one can attempt to classify the various areas where information is needed for development, as well as the information systems and infrastructures available, or required, to provide for those different needs. There are several reasons why information has not yet played a significant role in development. One is that planners, developers, and governments do not yet acknowledge information as a basic resource. Another is that the quality of existing information services is such that they cannot yet make an effective contribution to information provision for development.

Avoiding development failure

Companies blame their unfinished system projects on factors such as poor technology, excessive budgets, and lack of employee interest. Yet all these factors can be avoided. What is needed to develop and implement successful systems is a strong corporate commitment and a basic formula that has proven effective time after time. By following the guidelines below, system workers can install and implement a successful, efficient system quickly and with minimal disruption to the workplace.

Understand your workplace - every company must fully understand its existing environment in order to change it successfully.

Define a vision for the future - this objective view will help the company develop a clear picture of where it is going.

Share the vision - for the system to succeed, all those involved in its development must fully buy into the process and the end product. Sharing the vision also helps to define specific goals and expectations. 
Organize a steering committee - this committee, headed by the executive most affected by the success or failure of the project, must be committed and involved throughout all stages.

Develop a plan - the project plan should represent the path to the vision and detail the major stages of the project, while still allowing room for refinement along the way.

Select a team of users - a sampling of company employees is important to help create, and then test, the system. In the laboratory systems failure case, this means that both the vendor and the laboratory should identify what users know and what they need to know to get the best out of the LIS, and should develop a formal training plan before selecting a system.

Create a prototype - before investing major dollars in building the system, consider investing in a prototype or mock system that physically represents the end product. This is similar in concept to an architect's model, which allows one to touch and feel the end product before it is created.

Have the users develop the system - it is the end users who will benefit directly from the system, so why not let them have a hand in developing it? In the DME is DBA case, the Open Software Foundation's (OSF) Distributed Management Environment failed because the OSF tried to go from theory to perfect product without the real-world trial and error that is so critical to technology development.

Build the solution - with a model in place, building the solution is relatively easy for the programmer. Users continue to play an important role at this stage, ensuring smooth communication and accurate user requirements.

Implement the system - testing the system, training, and learning the new procedures can now begin. Because the majority of the time up to now has been spent planning and organizing, implementation should be smooth, natural, and, most importantly, quick. 
The Role of the SAA and ACS in the Assurance of Quality

The Standards Association of Australia was established in 1922 as the Australian Commonwealth Engineering Standards Association. Its original focus was on engineering; it subsequently expanded to include manufacturing standards, product specifications, quality assurance, and consumer-related standards. The role the SAA plays is in quality certification. According to the SAA, a standard is a published document which sets out the minimum requirements necessary to ensure that a material, product, or method will do the job it is intended to do. For systems development, both the Standards Association of Australia and the Australian Computer Society provide guides and standards for developing a system, controlling its quality, and preventing failure; they also make the standards of systems developed here compatible worldwide. When software development projects fail, they usually fail in a big way. For large development projects the cost is typically astronomical, both in dollars spent and in human resources consumed, sometimes with implications reaching far enough to affect a whole society adversely. Too often, the mistakes made in one project are perpetuated in subsequent ones. As with the error that occurred in the London Stock Exchange system, what should be done is to find out how the system allowed the error to happen, fix it, and then learn from it so that future information systems are developed better.

Bibliography:
1. George Black, "Fail-safe Advice," Software Magazine, March 1993.
2. Anonymous, "All Fall Down," The Economist, 20 March 1993.
3. Robin Layland, "DME is DBA (Dead Before Arrival)," Data Communications, February 1994.
4. Selim El Raheb, "There's No Excuse for Failure," Canadian Manager, September 1992.
5. Stanley J. Geyer, M.D., "Laboratory Systems Failure: The Enemy May Be Us," Computers in Healthcare, September 1993.
6.
Standards Australia, Australian Standard: Software Quality Management System.

f:\12000 essays\technology & computers (295)\Questions of Ethics in Computer Systems and their Future.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

1) Identify and discuss security issues and considerations evident for information systems and computerization in the brokerage industry. (Think about how the Internet has already influenced trading.)

"The technology is getting ahead of regulators," claims David Weissman, director of money and technology at Forrester Research Inc., in Cambridge, Mass. If one believes that quote, it sounds very ominous for the regulators and the government even to attempt to bring this medium under any kind of regulation. But what is it that the government agencies truly want to regulate? If you accept the argument that this medium, the Internet, is truly a public access network, then the control they would like to extend over it would make it the most regulated public access system in history. What I believe is being attempted here is regulation through censorship. Since it is almost impossible to censor the phone networks without actually eavesdropping on your calls, they have decided instead to regulate and censor your written word. The danger is that what you write as an opinion may be construed by some government regulator as a violation of a regulatory act; the flip side is that if you said the same thing through another medium, such as the phone system, nothing would ever come of it. The bigger question is how much government people want in their lives. The Internet was presented to the public as the next great technology of this century, and it is without a doubt as big as, if not bigger than, any other public means of communication that has come before it. 
With that in mind, I think the government is trying to extract its pound of flesh for what it believes is missed revenue that could be collected through tax regulations. "There are probably insiders touting stocks on the Internet, either anonymously or under assumed names," said Mary Schapiro, president of the National Association of Securities Dealers, which oversees the NASDAQ market. The argument that both the government and NASDAQ are currently running with is the "protection of the investor." But NASDAQ's complaint is fairly superficial: for them it is clearly a loss of income in their trading environment, and for the government a loss of taxes that could be derived from those trades. Are we to believe that both of these bodies have only the best intentions at heart for the investors who might be duped? These issues have been around for a long time through other media: the phone system, direct mail marketing, "cold calling" from "boiler room" houses, and even unscrupulous brokers working in legitimate brokerage firms. People today are still the victims of these scams through the older technologies. So if the older scams are still in use, why should we believe the regulators will have any more success tackling the complex nature of the Internet and the myriad of scams that could spring from it? The rate of convictions for past indiscretions is low at best; one need only look at the mountain of arrests for "insider trading" that the government made from the late 1970s through the middle 1980s to realize that, for all the hype about cleaning up Wall Street, not much ever came of the scourging. It seems to me that Ms. Schapiro would be better advised to align her NASDAQ forum with Internet technology and take advantage of it, rather than using the government to bully people into being afraid of it. 
Her second quote, that "there is a tremendous amount of hype," comes off as nothing but sour grapes and an opportunity to use her position to knock the Internet. If she honestly believes her own system of doing business is any less likely to fall victim to insider trading, or to traders touting stocks beyond what they should be touted, she is sadly mistaken. The average investor will use every opportunity presented if it seems to give an advantage in investing. Just look at a place like Harry's at Hanover Square, a popular bar in the Wall Street area: on the right afternoon one need only walk around the bar to hear the broker types hyping their own stocks to each other and to just about anyone in sight. Are the regulators ready to police this very common practice of the last 30 years? Or the brokers who spend weekends on golf courses singing the praises of their stocks to customers and fellow brokers alike, then come in on Monday and trade on the weekend's knowledge, or on what they heard at the bar? How do they regulate that kind of "insider trading"? They have no way to help or protect the person who is not privy to those conversations and dealings. The availability of the Internet, which carries such information to a far larger market, actually levels the playing field for many people who invest. I do not believe that those who use the Internet for financial information are so wild in their investing as to fall for the super-hype of someone they do not know; those who would fall for it would have fallen for it through any medium out there, because their approach is to win at all costs, legal or not. 
In closing, the argument presented by NASDAQ and the government is a weak one at best. I do not believe any government agency should be pressured into regulating a medium because of private industry's displeasure with it; regulations passed on private industry's demand usually lead to more problems than before. One only has to look at the great S&L bank failures that occurred after the government stepped in to help the S&L industry. We will never know the true value of all the losses in the S&L failures that flowed from government agencies answering the call with regulations meant to prop up a dying industry. The American people and the government should take notice: what the government tried to do in regulating banking in the 1980s could very well become the debacle of the late 1990s if it tries to regulate the Internet to save some parts of the Wall Street industry. Perhaps the Internet will sound the death knell for parts of many industries, but I believe it is only the start of many great things for everyone who takes advantage of it.

2) Provide what regulations and guidelines, if any, you feel need to be implemented for this situation.

Following from the preceding question, I believe any regulations passed to help the Wall Street industry would create situations even more serious than the S&L failures or the "insider trader" failures of the 1980s. There is always a fine line between a regulation for the good of the consumer and a regulation designed to protect an industry. I believe there are enough regulations out there already to protect the Wall Street industry as it presently exists; to conjure up regulations for every medium that might come down the road, to protect every industry's or private citizen's environment, is simply too much government in everyone's face. 
Not only will the federal government want its piece of the action; let us not forget that cash-strapped states like New York will be looking for theirs too. I will discuss this further in question 3.

3) Discuss ethics and surveillance concepts that pertain to this situation.

I would like to discuss the ethics problems from both sides of the equation. From the standpoint of the trader, the problems are fairly obvious. He has to do his job within the guidelines that regulate his profession, and must not trade on information that has been obtained illegally by whatever means. This includes what I mentioned in question 1: information obtained through channels the general public is not privy to, all those bar dates and golf dates where information about stocks is bandied about like idle gossip at a garden club party. Technically this is "insider trading," but how does the government intend to curb it through any kind of surveillance? It cannot, any more than it could curb the problem on the phone network short of tapping phones and monitoring conversations. Where does monitoring for wrongdoing end and the infringement of Constitutional rights begin? The danger here is obvious: for every regulation the government perceives as needed, the American citizen gives up a little more of the right to privacy and free speech. For the trader, this means worrying that whatever he says or writes about a particular stock may be construed as classified information somehow derived from an illegal source. On the public's side, the worry is that a successful, profitable trade may look as though it rested on some kind of special information. For both sides, the questions of ethics in trading can only be answered by those who are involved. 
The majority of the industry does do everything above board, and I believe there are enough regulations and enough surveillance already to keep a fairly tight lid on everyone who chooses to be involved. Nothing is ever 100 percent, but what is being done to police the industry is deterrent enough against the "insider trading" temptation. There will always be those who break the law in pursuit of dollars, and some who break it simply for the thrill of getting over on the system, but for the vast majority this is not how they invest, and they should not be penalized by overzealous government regulators and an industry looking to extract dollars from a technology. You will never be able to stop the criminal types who use the Internet for criminal advantage, any more than you can stop all street crime. You cannot regulate the Internet into preventing crime any better than you can regulate all people out of doing criminal things; a small minority will always find the easy or criminal way around everything. To regulate the Internet in an attempt to protect the public is just another form of censorship, and the government would be riding a very fine line between this concept of protection and the right of the individual to express an opinion. If I publish on my Internet page that I made a great buy of a stock this week, that is my opinion and only my opinion. Should the government, or any private group, come along and attempt to sue me or censor me in some fashion just for that opinion? Should I worry that someone reading my page might act on what I wrote? If he or she does, I would have to say it is rather foolish to invest money on my opinion alone; by the same token, I would never react and invest on someone else's say-so without first thoroughly checking all the facts. 
Do people go out and kill because they see a violent movie? I don't think so! Then why would the government say it needs to protect the public's interest by watching my home page or anyone else's out there? Do they listen to your phone calls? Do they read your mail, or your e-mail? Do they tell you what books to read or what movies to see? Then why would I want them surfing the Internet under the guise of public protection? I'm an adult and would like to be treated as such; I can make correct decisions not only about how my money is spent but where. If I found something on the Internet that I felt was truly criminal, I would alert people to watch out for it. You would be surprised how well Internet people police the net and warn their friends and others. Don't buy into the government hype of public protection: for all the mediums I listed above, the scams related to Wall Street are still happening big time, and the government already regulates those technologies for our protection! It's not regulation they are looking for; it is ownership of the Internet they are trying for, and with the help of big business, which sits there and cries foul, they may very well achieve it. Talk about two groups in need of finding some ethics; big business and the government are both sorely lacking. This is the first major technology that has leveled the playing field for even the littlest user. Don't buy the hype; wherever you can, try to keep the regulators out, by voting, writing your congressman, or whatever it takes legally. We are intelligent enough to make our own decisions!!

4) The year is 2016. Describe how information technology and computerization have impacted individuals and society in the past 20 years.
Let's look at it from an everyday perspective. First, you'll be gently awakened by an alarm that you set by voice the night before, playing what you want to hear; again, that decision was made the night before. You'll enter a kitchen where, on voice command, you order your cup of coffee and whatever breakfast you want, because your computer-run appliances will be able to do this for you. Next you go to your printer and pick up a copy of the newspaper you want to read; you will have programmed it to extract the news you want from five or six different sources, and it will be waiting for you. If you're feeling really lazy, you could have your computer read it to you in a smooth digitized voice that you've selected. You'll finish up in your computerized bathroom, which not only knows how hot you like your shower but also dispenses the right amount of toothpaste onto your toothbrush. After dressing from your computerized closet, which selected all your clothes for the week, you'll enter your computerized car, which is entirely voice activated. There is also a satellite guidance system for the times you might get lost, but you've already programmed the car to know how to get you to work. Work will be only a three-day-a-week affair, with the other two days spent working out of your home. Your office will be totally voice activated; you'll run all of the programs you need for the day by voice. You'll conference-call to other office sites, but in full-motion video. The next step will be 3-D holograms, but that hasn't quite come to market yet. You'll instruct your computer by voice to take any e-mail you need to send, and it will be sent in real time. The rest of the office is also capable of forwarding you any phone calls or messages, because the central computer in the office will know your whereabouts at any time as you pass through any door.
Your day is over; you'll leave instructions for your computer to watch certain events throughout the night, and if need be you can be reached at home. You'll be paid in credits to the credit cards of your choice; there will no longer be money exchanged. To protect against fraud on your cards, when you spend money you'll use your thumbprint as you would your signature now. At night you'll come home to a far less stressful environment, because the computer appliances in your house have taken away a lot of the mundane jobs you used to do. You'll be able to enjoy high-definition TV and receive some 500 channels. After checking with your voice-activated home computer to see if there are any phone messages or e-mail, you'll retire to bed, of course in your climate-controlled home that knows what settings you like in what parts of the house. Oh yes, you won't even have to tell your voice-activated computer not to run your computerized sprinkler system for your lawn, because it will have realized from the weather report that it will rain.

1) Identify and discuss security issues and considerations evident for information systems and computerization in the brokerage industry. (Think about how the Internet has already influenced trading.)

"The technology is getting ahead of regulators," claims David Weissman, director of money and technology at Forrester Research Inc., in Cambridge, Mass. If one is to believe the quote above, it sounds very ominous for the regulators and the government to attempt to bring this medium under any kind of regulation. But what is it that the government agencies truly are looking to regulate? If you accept the argument that this medium, the Internet, is truly a public access network, then the control they would like to extend over it would make it the most regulated public access system in history. What I believe is being attempted here is regulation through censorship.
Since it is almost impossible to censor the phone networks without actually eavesdropping on your phone calls, they have decided to regulate and censor your written word. The danger in this is that what you write as an opinion may be construed by a government regulator as a violation of some regulatory act; the flip side is that if you did the same thing through another medium, such as the phone system, nothing would ever come of it. The bigger question here is how much government people want in their lives. The Internet was brought into the picture for the public as the next great technology of this century. It is without a doubt as big as, if not bigger than, any other public means of communication that has come before it. With that in mind, I think the government is trying to extract its pound of flesh for what it believes is missed revenue that could be collected in the form of tax regulations. "There are probably insiders touting stocks on the Internet either anonymously or under assumed names," said Mary Schapiro, president of the National Association of Securities Dealers, which oversees the NASDAQ market. The argument that both the government and NASDAQ are currently running with is the "protection of the investor." When one looks at NASDAQ's complaint, it is fairly superficial: for NASDAQ it is clearly a loss of income for its trading environment, and for the government it is a loss of taxes that could be derived from those trades. Are we to believe that both of these agencies have only the best intentions at heart for those investors who might be duped? These issues have been around for a long time through the use of other mediums: the phone system, direct-mail marketing, "cold calling" from "boiler room" houses, and even unscrupulous brokers who work in legitimate brokerage houses. People today are still the victims of these types of scams through the use of the older technologies.
So if the older scams are still being run, why is one to believe that regulators will have any more success tackling the complex nature of the Internet and the myriad scams that could grow out of it? The success rate of convictions from past indiscretions is low at best; one only has to look at the mountain of arrests for "insider trading" that the government made from the late 1970s through the middle 1980s to realize that, for all the hype about cleaning up Wall Street, not a whole lot ever came from the scourging. It seems to me that Ms. Schapiro would be better suited to align her NASDAQ forum with Internet technology and take advantage of it, rather than using the government to bully people into being afraid of the technology. Her second quote, "there is a tremendous amount of hype," comes off as nothing but sour grapes and a big opportunity to use her position to knock the Internet. If she honestly believes her system of doing business is any less likely to fall victim to insider trading, or to traders touting stocks beyond what they should be touted, she is sadly mistaken. The average investor is going to use every opportunity presented to them if they think it will give them an advantage in investing. Just look at places like Harry's at Hanover Square, a popular bar in the Wall Street area, where on any given afternoon one need only walk around the bar to hear the broker types hyping their own stocks to each other and to just about anyone in sight. Are regulators ready to police this very common practice, carried on for the last 30 years? Or how about the brokers who spend weekends on golf courses singing the praises of their stocks to customers and fellow brokers alike, and who then come in on Monday and trade off the weekend's knowledge, or off what they heard at the bar?
How do they regulate this kind of "insider trading" activity? They have no way to help or protect the person who is not privy to these kinds of conversations or dealings. The availability of this information over the Internet to a larger market base, I believe, actually evens the playing field for a lot of people who invest. I don't believe that those who would use the Internet for financial information are so wild in their approach to investing as to fall for the super-hype of someone they don't know. Those who do would have fallen for it through any medium out there, because their approach is to win at all costs, regardless of whether it is legal or not. In closing, the argument presented by NASDAQ and the government is a weak one at best. I don't believe any government agency should be pressured into regulating a medium because of private industry's displeasure with that medium; regulations passed on private industry's demand usually lead to more problems than before. One only has to look at the great S&L bank failures that occurred after the government stepped in to help the S&L industry out. We will never know the true value of all the losses in the S&L failures that came from government agencies answering the call with regulations meant to prop up a dying industry. The American people and the government should stand up and take notice: what the government tried to do in regulating banking in the 1980s could very well be repeated as the debacle of the late 1990s in trying to regulate the Internet to save some parts of the Wall Street industry. Maybe the medium of the Internet will sound the death knell for some parts of a lot of industries, but I believe it is only the start of many great things to come for everyone who takes advantage of it.

2) Provide what regulations and guidelines, if any, you feel need to be implemented for this situation.
Based on the preceding question, I believe any regulations passed to help the Wall Street industry would create situations even more serious than the S&L failures or the "insider trader" failures of the 1980s. You always walk a fine line between regulations for the good of the consumer and regulations designed to protect an industry. I believe there are enough regulations out there to protect the Wall Street industry as it presently exists; to have to conjure up regulations for every medium that could possibly come down the road, to protect every industry or private citizen's environment, is just too much government agency in everyone's face.
f:\12000 essays\technology & computers (295)\Radar A Silent Eye in the Sky.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Radar: A Silent Eye in the Sky Daniel Brosk Period Two

Today's society relies heavily on an invention taken for granted: radar. Just about everybody uses radar, whether they realize it or not. Tens of thousands of lives rely on the precision and speed of radar to guide their planes through the skies unscathed. Others just use it when they turn on the morning news to check the weather forecast. While radar seems to be an important part of our everyday lives, it has not been around for long. It was not put into effect until 1935, shortly before World War II. The British and the Americans both worked on radar, but they did not work together to build a single system; each developed its own system at the same time. In 1935, the first radar systems were installed in Great Britain, called the Early Warning Detection system. In 1940, Great Britain and the United States installed radar aboard fighter planes, giving them an advantage in plane-to-plane combat as well as in air-to-ground attacks.
Radar works on a relatively simple theory, one that everybody has experienced in their lifetime: radar works much like an echo. In an echo, a sound is sent out in all directions. When the sound waves find an object, such as a cliff face, they bounce back to the source. If you count the number of seconds from when the sound was made to when the echo was heard, you can figure out the distance the sound had to travel. The formula is: (S / 2) x 1100 = D (half of the total time, times 1100 feet per second, equals the distance from the origin to the reflection point). Of course, radar is a much more complicated system than somebody shouting and listening for the echo. In fact, modern radar determines not only that there is an echo, but where the echo comes from, what direction the object is moving, its speed, and its distance. There are two types of modern radar: continuous wave radar and pulse radar. Pulse radar works like an echo. The transmitter sends out short bursts of radio waves; it then shuts off, and the receiver listens for the echoes. Echoes from pulse radar can tell the distance and direction of the object creating them. This is the most common form of radar, and it is the one used most in airports around the world today. Continuous wave radar works on a different principle, the Doppler effect: when a radio wave of a set frequency hits a moving object, the frequency of the reflected wave changes according to how the object is moving. If the object is moving toward the Doppler radar station, it will reflect back a higher-frequency wave; if it is moving away, the frequency of the wave will be lower. From the change in frequency, the speed of the target can be calculated. This is the type of radar used to track storms, and the type used by police in radar guns. These are the basics of radar.
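The two calculations above can be sketched in a few lines of Python. This is a minimal illustration, not code from any radar system; the helper names are made up, 1100 ft/s is the essay's round figure for the speed of sound, and the Doppler relation used is the standard two-way approximation for a reflecting target.

```python
SPEED_OF_SOUND_FT_PER_S = 1100   # the essay's round figure for sound in air
SPEED_OF_LIGHT_M_PER_S = 3.0e8   # approximate speed of radio waves

def echo_distance_ft(round_trip_s):
    """The echo formula from the text: (S / 2) x 1100 = D."""
    return (round_trip_s / 2) * SPEED_OF_SOUND_FT_PER_S

def doppler_speed_m_per_s(transmit_hz, received_hz):
    """Target speed from the Doppler shift of a reflected radio wave.

    Standard two-way relation for a reflecting target: the shift is
    roughly 2 * v * f / c, so v = delta_f * c / (2 * f). A positive
    result means the target is approaching (a higher frequency comes
    back), matching the description in the text.
    """
    delta_f = received_hz - transmit_hz
    return delta_f * SPEED_OF_LIGHT_M_PER_S / (2 * transmit_hz)

# Four seconds from shout to echo puts the cliff 2200 feet away.
print(echo_distance_ft(4))                       # 2200.0

# A radar at 10 GHz seeing a +2000 Hz shift implies a 30 m/s closing speed.
print(doppler_speed_m_per_s(10e9, 10e9 + 2000))
```

A radar gun is just the second function run in reverse: it measures the frequency shift and reports the implied speed.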
But there is a lot of machinery and computer technology involved in making an accurate picture of what is in the sky, on the sea, or on the road. Most radar systems are a combination of seven components (see Appendix A), each a critical part of the system. The oscillator creates the actual electric waves and sends the radio waves to the modulator. The modulator is part of the timing system of a radar set: it turns the transmitter on and off, creating the pulse radar effect, telling the transmitter to send out a pulse and then wait for four milliseconds. The transmitter amplifies the low-power waves from the oscillator into high-power waves; these high-power pulses usually last for one-millionth of a second. The antenna broadcasts the radar signals and then listens for the echoes. The duplexer is a device that permits the antenna to be both a sending device and a receiving device: it routes the signal from the transmitter to the antenna, and then routes the echoes from the objects to the receiver. The receiver amplifies the weak signals reflected back to the antenna; it also filters out background noise that the antenna picks up, sending only the correct frequencies to the signal processor. The signal processor takes the signals from the receiver and removes signals from stationary objects, such as trees, skyscrapers, or mountains; today this is mostly done by computers. And last, but not least, we come to the display screen. For many years this was a modified TV tube with an electroluminescent coating, which lights up when hit by electrons and retains the glow for a few seconds. This is what creates the "blips" on the radar screen that flash about every ten seconds and then fade. In newer systems, the signal processor and the display screen are combined into a single computer.
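The signal processor's clutter-removal step described above can be illustrated with a toy sketch. The representation (each sweep as a list of echo ranges) and the function name are illustrative assumptions, not how real radar software is structured; the idea is simply that a return sitting at the same range on two consecutive sweeps is treated as a stationary object and dropped.

```python
def remove_stationary(prev_sweep, curr_sweep, tolerance=0.0):
    """Keep only echoes whose range changed between consecutive sweeps.

    prev_sweep and curr_sweep are lists of echo ranges (in feet).
    An echo in curr_sweep that matches any range in prev_sweep within
    the tolerance is assumed to be clutter (a tree, building, or
    mountain) and is filtered out.
    """
    moving = []
    for echo_range in curr_sweep:
        if all(abs(echo_range - old) > tolerance for old in prev_sweep):
            moving.append(echo_range)
    return moving

prev = [1200.0, 5000.0, 8800.0]       # sweep N: three returns
curr = [1200.0, 5150.0, 8800.0]       # sweep N+1: only one return moved
print(remove_stationary(prev, curr))  # [5150.0]
```

The two returns at 1200 and 8800 feet never move, so they are rejected as clutter; only the target that shifted from 5000 to 5150 feet survives to the display.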
With the power of today's computers, this information is transmitted around the world, to other airports, to the government, and to TV stations, where weather broadcasts are made. Today, radar systems are standard around the country. The United States has the most sophisticated radar system, both on the ground and in the sky. On the ground, we track planes, weather, ships, and intercontinental ballistic missiles. From space, we use satellites with radar to map the globe, spy on foreign countries, and track traffic over the oceans. In each instance, radar plays a key role in our day-to-day lives.

Bibliography
Hitzeroth, Deborah. Radar: The Silent Detector, 96 pp., ills., Lucent Books, 1990.
Page, Irving H. "RADAR," The New Book of Popular Science, pp. 246-253, Grolier Inc., 1994.

f:\12000 essays\technology & computers (295)\regulating the internet whos in charge.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
James H Huff Huff1 English 111G Fall 1996

The internet was started by the military in the late 1960s, and has since grown into an incredibly large and complex web, which will no doubt affect all of us in the years to come. The press has recently taken it upon itself to educate the public about the dark side of this web, a network which should be viewed as a tremendous resource of information and entertainment. Instead, due to this negative image, more and more people are shying away from the internet, afraid of what they may find there. We must find a way to regulate what is there, protect ourselves from what is unregulatable, and educate the general populace on how to use this tremendous tool. "The reality exists that governance of global networks offers major challenges to the user, providers, and policy makers to define their boundaries and their system of government" (Harasim, p. 84). The internet is a group of networks, linked together, which is capable of transmitting vast amounts of information from one network to another.
The internet knows no boundaries and is not located in any single country. The potential the internet has for shaping our world in the future is inconceivable. But with all its potential, the internet is surrounded by questions about its usage. The internet was named the global village by McLuhan and Fiore in 1968, but recently it has been more properly renamed the global metropolis. Robert Fortner defines the internet as a place where people from all different cultures and backgrounds come together to share ideas and information. "Communication in a metropolis also reflects the ethnic, racial, and sexual inequalities that exist generally in the society" (Fortner, p. 25). When a person enters a global metropolis to engage in communication, they do not know whom they will interact with, nor do they know what information they may come across. This brings an important question to mind: if this is a community, a global metropolis, should it not be governed to protect the members of the community? But more importantly, can a community that knows no boundaries and belongs to no country be regulated? And who can, or should, regulate it? With the vast amounts of information transmitted from network to network, with some information remaining at sites temporarily or disappearing within seconds, how can one regulate it? In a meeting of the Senate Select Committee on Community Standards in Australia, iiNet, an Australian internet provider, presented facts on how much information passes through its server daily: "Our own network sees over 200,000 items of email between individuals every day of the year, and this is increasing. In USENet news, the 'discussion areas', iiNet sees 150Mb of typed data every day, over 100,000 pages. This includes people chatting idly, informational postings, questions, answers and anything else that the committee can imagine people wishing to talk about." (Senate Committee)
This is an example of one server; the information that passes through it originates from all over the world. The point is that this one provider cannot possibly review everything that passes through its server. Should the internet be regulated? We know that it can't and never will be perfectly regulated, and therefore the user will always need to be aware that he is entering a global community and may find some information offensive. For example, one of the hottest issues in the news is the internet transmitting pornography. Individuals and companies do upload and download pornography, ranging from pictures of nude men and women to child pornography. Many schools have adopted the idea of bringing computers into the classrooms. "In the classroom, where youngsters are being introduced to the machines as early as kindergarten, they astound-and often outpace-their teachers with their computer skills." (Golden, p. 219) Educating students about computer literacy is an important aspect for the upcoming generation; computer literacy will become just as important for people to understand as reading, writing, and arithmetic. With this increased ability at such a young age comes the ability to access the net, and the places on the net that we as parents don't want our children going, much the same as the ability to walk enables them to go places they don't belong. The United States has laws which regulate pornography with a clear understanding of the First Amendment's allowance for freedom of speech. There is a difference between obscenity, which is not protected by the First Amendment, and indecency, which is! The way the U.S. determines what is obscene and what isn't is by using the Miller three-part test. The test is listed here: 1. Would the average person, applying contemporary community standards, find that the work, taken as a whole, appeals to the prurient interest? 2.
Does the work depict or describe, in a patently offensive way, sexual conduct specifically defined by the applicable state law? 3. Does the work, taken as a whole, lack serious literary, artistic, political, or scientific value? As one might imagine, it is complex enough trying to deem what is obscene and what isn't using this test. All three answers must be "yes" in order to deem something obscene. Every state has different pornography laws based on this test, because every state has different community standards. Yet we are dealing with a global metropolis, in which many people with different national standards exist. "National laws are just that, national in orientation and application" (Harasim, p. 923). If we are proposing regulating the internet to make it illegal to distribute and receive obscene material, we need to find a law that the world could agree on. If the world accepted the Miller test of how to determine obscene material, what would be the standard needed in order to answer the first question? These are the questions facing the government, providers, and users. Many users are saying that regulating the internet is foolish and futile. A new act introduced in the Senate, called the Exon/Gorton Communications Decency Act, would give the government authority over what can and cannot be sent over the internet, and many users are lobbying voters to write their senators and ask them not to vote for it, invoking the First Amendment. Is anyone regulating the net? The answer is yes: the providers and some universities are trying to regulate some things. Daniel C. Robbins, the author, artist, and producer of the bondage, domination, submission, sadism, and masochism web page, was told by an administrator of Brown University that he would have to shut down his page because of its content.
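The conjunctive logic of the three-part test described above can be made explicit in a few lines. This is only a sketch of the "all three must be yes" rule; the function and parameter names are mine, purely for illustration, and of course a real obscenity determination is made by courts, not code.

```python
def miller_test(prurient_interest, patently_offensive, lacks_serious_value):
    """Material is deemed obscene only if all three prongs are 'yes':
    1. appeals to the prurient interest (community standards),
    2. depicts sexual conduct in a patently offensive way (state law),
    3. lacks serious literary, artistic, political, or scientific value.
    """
    return prurient_interest and patently_offensive and lacks_serious_value

# A work with serious artistic value fails prong 3, so it is not obscene
# even if the first two prongs are satisfied.
print(miller_test(True, True, False))   # False
```

This makes the point in the text concrete: a single "no" on any prong defeats a finding of obscenity, which is why the test is so hard to apply uniformly across communities with different standards.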
The web page contained stories ranging from married couples tying each other up to non-consensual rape, torture, and murder, as well as pictures and an interactive virtual reality dungeon. (Robbins) America Online (AOL) has also pulled people's posts because of their content. The reasoning is that these people violated the Terms of Service agreement which they accept when they sign onto AOL. The Terms of Service agreement for AOL states that members must refrain from using vulgarity and insulting language, and from talking explicitly about sex. Immediately people cry censorship and plead their First Amendment rights! But in both cases, First Amendment rights did not apply. AOL is a private provider; it has a right to decide whom it lets onto the net, and it breaks no laws by not allowing members complete freedom of speech. The university likewise has the right to say what is received or sent on its server. The government has started to take a stronger position regarding the internet. Officials have investigated a few incidents concerning child pornography, and have begun to investigate more obscene material being sent over the net. Child pornography is defined as pictures or any visual form that shows minors, under the age of 18, in a sexual way. The material does not need to be legally obscene in terms of the test stated above to be deemed child pornography. All child pornography is illegal and does not enjoy First Amendment protection. Written material about children engaged in sexual acts does not count as child pornography, because the material has to use real minors; drawings also do not count. It is easier to regulate against child pornography because, in the U.S., merely possessing it is illegal. Whereas a person cannot be prosecuted for having other obscene material in his home, if child pornography is found the person will be prosecuted.
If one uploads child pornography, or obscene material for that matter, they can be charged with transporting obscene material across state lines for distribution, which is a crime. Officials, especially when it comes to child pornography, are starting to take as strong a stand as they can. The only reason the government has been able to respond the way it has is that it can prosecute people in the U.S., more for downloading than for uploading child pornography, because the law there is so strong. This has made some users concerned about whether they are involved in illegal activity. The authors of Cyberspace and the Law have made a flow chart to demonstrate what should and should not lead a user into legal trouble. It points out something even more ominous than pornography: electronic fraud. "Computer crime can be enormously profitable." (Logsdon, 162) "The opportunities for creative fraud are vastly greater than they used to be." (Baig, Business Week, Nov. 14, '94) Computer embezzlement can be very profitable, with literally hundreds of thousands of dollars right at the embezzler's fingertips. Many computer embezzlers are not caught; if they are, it is usually only by chance. Also, those who embezzle and are caught usually "escape prosecution because the institutions they rob prefer to avoid the unfavorable publicity of a public trial." (Logsdon, 164-5) The temptation is great for many who are computer geniuses. "The average lifted in an embezzlement involving computers is $430,000-and it is not uncommon for the total to go considerably higher." (Logsdon, 163) This leads to the question of trust and privacy. New technologies are being developed to help protect citizens from fraud and give them a sense of privacy, but in the meantime consumers must remember the old adage: "If it sounds too good to be true, it is!" There are still many flaws that need to be worked out with the new computer revolution. 
As someone wrote in a Usenet group on the Internet: "The ultimate authority of a claim to my identity is me and my credibility." (Internet source #1) It is still up to the individual whether or not to believe what has been said and by whom it was said. Can the net be regulated? What is it that we want the internet to be for us and our society? Is it safe to allow our children to play with a system that adults do not fully understand and are not sure how to control? These are not easy questions to answer. As the net grows, governments will most certainly become more involved, and regulation will most certainly follow. Most importantly, we as adults, parents, and educators must find ways to teach our children how to use this powerful tool constructively. Granted, that's not easy in today's fast-paced, two-income, latch-key-kid society, but it is imperative that we find a way. Maybe the answer is to take an hour of television time and devote it to computer literacy. (Then, while we're at it, let's take another hour and read a book!) If that's not possible, there are ways to block out certain sites, much the same as the V-chip used on televisions. These are readily available, many at no cost, on the internet. This allows us, as users, to regulate what enters our homes. References 1. Harissam, Linda. Global Networks. Mass. Institute of Technology, 1993. 2. Fortner, Robert. International Communication. Wadsworth, Inc., Belmont, Calif., 1993. 3. Senate Select Committee on Community Standards. 4. Robbins, Daniel. Documents on Bondage Web Page. 5. Cavazos, Edward, and Morin, Gavino. Cyberspace and the Law. Mass. Institute of Technology, 1994. 6. Turner Research Committee. 7. Berniker, Mark. "Internet begins to cut into TV viewing." Broadcasting & Cable, Nov. 6, Vol. 125, p. 113. 8. Kanamine, Linda. "Gamblers stake out the Net." USA Today, Nov. 1, cover story. 
f:\12000 essays\technology & computers (295)\Response to AOL contraversy essay.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The article "America Online, while you can" by Bob Woods is all about the hoopla over the fact that America Online, or AOL, has not been able to accommodate its vast number of customers. This is due to AOL's new flat rate, which replaced their original hourly deal. Many AOL users get busy signals when trying to log on. When and if they do get on AOL, the service runs extremely slowly because of the overload of users. Woods warns that AOL will lose many of its customers if it doesn't improve its resources. Other companies should beef up their advertising and try to cash in by targeting the unsatisfied AOL users. In this day and age of internet use, people in any given location can choose from at least fifteen national companies, such as SprintLink, CompuServe, Ameritech, Erols, and so on. Using these services is less expensive than America Online: for unlimited monthly use they average around $10 to $15, as opposed to AOL's hefty $19.95 a month. AOLers are paying for the appealing menus, graphics, and services AOL uses to drive its customers to the internet. These same features can be found anywhere else on the net with the aid of any search device, such as Infoseek, Yahoo, Microsoft Network, or WebCrawler. These sites are no harder to use, and they provide lots of helpful menus and information. In his article, Woods states that he lives in Chicago, and AOL has several different access numbers to try if one is busy. He writes that he has often tried to log on using all of the available numbers and still been unsuccessful. This is a problem for him because he is dependent on AOL to "do the daily grind of (his) job as a reporter and PM managing editor." 
If I were not satisfied with the performance of my internet provider, which happens to be SprintLink, I would not complain to the company. I would take my money elsewhere, especially if my job depended on using the internet. With all of the other options available, the wasted time and inevitable frustration of using AOL could be eliminated. I live in Richmond, Va., which is a fairly big city, and have not once been logged off or gotten a busy signal using SprintLink. And I have only one access line available with my provider, as opposed to AOL's multiple lines. I agree with Woods that people will (in most circumstances) get better internet service and customer service from a local, smaller, or more specialized company. I think it is safe to say that America Online has done too little too late. In the internet business, or any commercial mega-corporation, I believe that you shouldn't advertise and try to get more clients than you are prepared to handle. AOL most definitely should have put more thought into the response its extensive advertising campaigns were sure to bring. I think that eventually people will realize that many other options exist, break away from AOL, and find other providers. I think that CompuServe thought this too, placing an ad during the Super Bowl stating, "We have the best internet service, call 1-800-NOT-BUSY." America Online users have recently banded together and filed a class action suit over all this. I don't see that as necessary, because they could easily find a smaller, localized company that would be more than happy to help out with today's demand for internet service. I do not understand why the unsatisfied AOL customers have not already taken their business elsewhere. Well, I can't make decisions for other people, but this should not have been such a big deal. 
Throughout my life, I have found that if something is not working out for you, it is better to evaluate your other options and find something more advantageous than to complain to the source and ask them to do the changing. Basically, what I am saying is: if you have a problem, fix it yourself and don't whine or cry to everyone else about your misfortunes. It would save a lot of time, trouble, and controversy. f:\12000 essays\technology & computers (295)\review of online newspapers.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Review of on-line publications Because of the prevalence of the internet in today's society, many thousands of papers now publish an on-line edition. It is through this medium that they wish to make inroads in the communications market. Many see it as a necessary step because of the readership lost to the internet, broadcast journalism, and radio. In my review I will examine an on-line edition of a newspaper from each of the continents. I will comment on some of the technical aspects each employs. I will also discuss the tools many of the papers are using, which not only make them a hybrid of broadcast, radio, and print journalism but also establish them as the only medium using interactive reporting. Representing the continent of Asia and the city of Hong Kong is the South China Morning Post. It is one of only a few English-language newspapers in the territory. The Post has an air of journalistic freedom the other newspapers do not seem to have. One of the lead articles outlined this concern and dispelled the rumor that China would censor the paper when the city is turned over in July. The Morning Post is a very up-to-date paper that features a breaking-news sidebar, a very useful and inviting feature which enables it to keep up with, and often scoop, the broadcast media. The newspaper also has a technology section which caters to the on-line user. 
The Post also uses JavaScript to make it seem more like an interactive medium. Of the papers available from India, the Times of India is one of the finest. It features a quite extensive archive, a metropolis section which features two cities a day, an easily accessible reprint section for syndication of articles, and a career-opportunities-in-India section aimed at the overseas applicant. It has some drawbacks, though: the world section, although very extensive, is more an overview of the continent and the region than of the entire world. It is not as inviting as some of the other on-line newspapers; it has an uninviting look that makes it a little less reader-friendly. Business Day is the best offering of dailies for South Africa. Its page takes a little longer to load, but that is due to a very dominant graphic which clearly outlines all of the major markets of the world. The major-market graphic is the most impressive element of this on-line newspaper. Coming in second is Business Day's entertainment section, which has well-written and intelligent reviews of cinema, books, theater, wine, and food. As a whole the paper has some major drawbacks: there are no pictures, essentially no world news, and the front page contained a spelling error. The Times of London is the signature paper of Europe. It is not only very easy to use but also very insightful and timely. It has a crossword puzzle, a first among all of the newspapers that I reviewed for this assignment. It has a true world news section with comprehensive coverage of the world. It has a very slick look that is complemented by many color and black-and-white pictures. Many of the stories incorporate graphics, which lends the paper a very contemporary web-designed look. The most compelling aspect of the paper was the feature "Personal Times," a customized newspaper built from the reader's general interests. 
The feature was one of many which distinguished the Times from any other paper I reviewed. Overall the Times was a very exciting newspaper and one which is very insightful into its readers' needs. An extremely modern-looking design dominates the Christchurch Press, one of the three on-line papers from the small island nation. It has the distinction of being the only paper I could find that had a photo gallery. The gallery was a very welcome find among the many papers on the internet that have all but abandoned photojournalism for more graphics and text. Surprisingly, it does not have a world news section. It has some unusual fare for an on-line newspaper, such as a motoring section, a teen section, a TV guide, and a computer tools section. It also distinguishes itself from other on-line newspapers by offering sound files. The sound files demonstrate how the media of on-line news and broadcast will probably merge into a gray area in the future. The Vancouver Sun is my installment for the North American continent. It is a very efficient newspaper that maximizes all the space it uses and loads very rapidly. It has distinct departments such as a net guide, a personal finance section, and a trivia page. It is similar to other Western papers in that it has an extensive, very comprehensive world news section. In the world section is a scrolling synopsis of the day's news that is very attractive to the on-line reader. Overall the Vancouver Sun represents some of the finest North America has to offer in the form of the on-line newspaper. In my search of the newspapers of South America I did not come across one that was printed in English or gave the option of choosing English. I chose the Gazeta of Brazil to review because it seemed to be the most modern of the South American newspapers. Although I could not read the features or the news articles, it seemed to have an extensive listing of all categories as well as a large world news section. 
f:\12000 essays\technology & computers (295)\ROBOTICS.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The image usually thought of by the word robot is that of a mechanical being, somewhat human in shape. Common in science fiction, robots are generally depicted as working in the service of people, but often escaping the control of the people and doing them harm. The word robot comes from the Czech writer Karel Capek's 1921 play "R.U.R." (which stands for "Rossum's Universal Robots"), in which mechanical beings made to be slaves for humanity rebel and kill their creators. From this, the fictional image of robots is sometimes troubling, expressing the fears that people may have of a robotized world over which they cannot keep control. The history of real robots is rarely as dramatic, but where developments in robotics may lead is beyond our imagination. Robots exist today. They are used in a relatively small number of factories located in highly industrialized countries such as the United States, Germany, and Japan. Robots are also being used for scientific research, in military programs, and as educational tools, and they are being developed to aid people who have lost the use of their limbs. These devices, however, are for the most part quite different from the androids, or humanlike robots, and other robots of fiction. They rarely take human form, they perform only a limited number of set tasks, and they do not have minds of their own. In fact, it is often hard to distinguish between devices called robots and other modern automated systems. Although the term robot did not come into use until the 20th century, the idea of mechanical beings is much older. Ancient myths and tales talked about walking statues and other marvels in human and animal form. Such objects were products of the imagination and nothing more, but some of the mechanized figures also mentioned in early writings could well have been made. 
Such figures, called automatons, have long been popular. For several centuries, automatons were as close as people came to constructing true robots. European church towers provide fascinating examples of clockwork figures from medieval times, and automatons were also devised in China. By the 18th century, a number of extremely clever automatons became famous for a while. Swiss craftsman Pierre Jacquet-Droz, for example, built mechanical dolls that could draw a simple figure or play music on a miniature organ. Clockwork figures of this sort are rarely made any longer, but many of the so-called robots built today for promotional or other purposes are still basically automatons. They may include technological advances such as radio control, but for the most part they can only perform a set routine of entertaining but otherwise useless actions. Modern robots used in workplaces arose more directly from the Industrial Revolution and the systems for mass production to which it led. As factories developed, more and more machine tools were built that could perform some simple, precise routine over and over again on an assembly line. The trend toward increasing automation of production processes proceeded through the development of machines that were more versatile and needed less tending. One basic principle involved in this development was what is known as feedback, in which part of a machine's output is used as input to the machine as well, so that it can make appropriate adjustments to changing operating conditions. The most important 20th-century development, for automation and for robots in particular, was the invention of the computer. When the transistor made tiny computers possible, they could be put in individual machine tools. Modern industrial robots arose from this linking of computer with machine. By means of a computer, a correctly designed machine tool can be programmed to perform more than one kind of task. 
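The feedback principle described above can be sketched in a few lines of code. This is a toy proportional controller, purely illustrative; the spindle-speed scenario, setpoint, and gain value are invented for the example and are not taken from any actual machine tool:

```python
def feedback_step(setpoint, measured, gain=0.5):
    """One feedback iteration: part of the machine's output (the measured
    value) is fed back in and compared with the target, and the machine
    adjusts itself in proportion to the error."""
    error = setpoint - measured
    return measured + gain * error

# A machine tool bringing a spindle up to a target speed of 1000 RPM:
speed = 600.0
for _ in range(20):
    speed = feedback_step(1000.0, speed)
# After repeated feedback steps, `speed` has converged close to 1000 RPM.
```

Because each step corrects only a fraction of the remaining error, the output settles smoothly toward the setpoint instead of overshooting, which is the essence of feedback-based self-adjustment.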
If it is given a complex manipulator arm, its abilities can be enormously increased. The first such robot was designed by Victor Scheinman, a researcher at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology in Cambridge, Mass. It was followed in the mid-1970s by the production of so-called programmable universal manipulators for assembly (PUMAs) by General Motors and then by other manufacturers in the United States. The nation that has used this new field most successfully, however, is Japan. It has done so by making robot manipulators without trying to duplicate all of the motions of which the human arm and hand are capable. The robots are also easily reprogrammed, and this makes them more adaptable to changing tasks on an assembly line. The majority of the industrial robots in use in the world today are found in Japan. Except for firms that were designed from the start around robots, such as several of those in Japan, industrial robots are still only slowly being placed in production lines. Most of the robots in large automobile and airplane factories are used for welding, spray-painting, and other operations where humans would require expensive ventilating systems. The problem of workers being replaced by industrial robots is only part of the issue of automation as a whole, and individual robots on an assembly line are often regarded by workers in the familiar way that they think of their car. Current work on industrial robots is devoted to increasing their sensitivity to the work environment. Computer-linked television cameras serve as eyes, and pressure-sensitive skins are being developed for manipulator grippers. Many other kinds of sensors can also be placed on robots. Robots are also used in many ways in scientific research, particularly in the handling of radioactive or other hazardous materials. Many other highly automated systems are also often considered as robots. 
These include the probes that have landed on and tested the soils of the moon, Venus, and Mars, and the pilotless planes and guided missiles of the military. None of these robots looks like the androids of fiction. Although it would be possible to construct a robot that was humanlike, true androids are still only a distant possibility. For example, even the apparently simple act of walking on two legs is very hard for computer-controlled mechanical systems to duplicate. In fact, the most stable walker yet made is a six-legged system. A true android would also have to house, or be linked to, the computer equivalent of a human brain. Despite some claims made for the future development of artificial intelligence, computers are likely to remain calculating machines without the ability to think or create for a long time. Research into developing mobile, autonomous robots is of great value. It advances robotics, aids the comparative study of mechanical and biological systems, and can be used for such purposes as devising robot aids for the handicapped. As for the thinking androids of the possible future, the well-known science-fiction writer Isaac Asimov has already laid down rules for their behavior. Asimov's first law is that robots may not harm humans either through action or inaction. The second is that they must obey humans except when the commands conflict with the first law. The third is that robots must protect themselves except, again, when this comes into conflict with the first law. Future androids might have their own opinions about these laws, but those issues must wait their time. 
f:\12000 essays\technology & computers (295)\Save the Internet!.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Did you know that 83.5% of the images available on the Internet were pornographic (Kershaw)? Did you know that pornography on the Internet is readily available to curious little children who happen to bump into it? The Internet, which became popular only a few years ago, is unequivocally one of the most revolutionary innovations in the computer world. The information superhighway has changed people's lives dramatically and has created many exciting new opportunities as well as markets to be exploited. But, unfortunately, the Internet has also created a haven for the depravity of pornography and hate literature. This calls for immediate action, and the only solution offered to date is censorship. The Internet must be censored to the utmost. Many people complain that censorship violates the First Amendment and suppresses freedom of speech, but there is a point where freedom of speech becomes corrupt; freedom of speech only creates an excuse for vile pornographers to poison our nation, not to mention our children. Pornography is regarded by the public as immoral and downright filthy. It denies human dignity and often stimulates the user to violent acts (Beahm 295); pornography and violence are thus correlated. It trivializes human beauty and converts it into commercialized slime (Beahm 295). Moreover, the consumption of pornography can lead to a detrimental addiction, and the consumer can become a slave to it (Beahm 297). In short, pornography is a very addictive drug, with a potency equal to or greater than that of hard-core drugs like heroin and cocaine. Can you imagine a ten-year-old innocently surfing the Internet, suddenly bumping into a pornographic site depicting explicit images of naked women, and becoming addicted? 
The damage is long-term, and when the time comes, we will have a nation of perverts. Galbraith says, "The U.S. constitution does not forbid the protection of children from a pornographer's freedom of speech. That must be inferred through the First Amendment." These are our children, and we have the right to protect them. The mental damage pornography does is aggravated by its availability: the ridiculously easy access to all types of pornography by anyone who logs onto the Internet has raised major concern from both the government and the public. The Internet, the biggest interactive library ever to exist, has no owner, president, chief operating officer, or pope (Montoya). "Inevitably, being an uncontrolled system, means that the Internet will be subjected to subversive applications of some unscrupulous users." (Kershaw) Internet users can publish pornography and hate literature, and that information is literally made available to millions of Internet users worldwide (Kershaw). A five-year-old can easily obtain pornography on the Internet just by typing the word "sex" into a search engine; literally hundreds of thousands of listings will appear on-screen, each leading to a smut page. This kind of easy accessibility has people calling for censorship (Kershaw). "Most popular images available were of hardcore scenes featuring such acts as paedophilia, defecation, bestiality and bondage." (Kershaw) According to Chidley, "In 1994, more than 450,000 pornographic images and text files were available to the Internet users around the world; that information had been accessed more than 6 million times." (58) This shocking figure is made worse by the fact that pornography would be very harmful to the young unsuspecting child who happens to stumble upon it while roaming about cyberspace (Kershaw). 
Remember, our children are our most important resource for the future; we have to shield them from negative influences so that they can be good citizens of tomorrow. "Regulating the Internet might be the only way to protect Internet users including our children from accessing obscene pages." (Montoya) Singapore has taken an encouraging step by establishing a "neighborhood police post" on the Internet to monitor and receive complaints of criminal activity, including the distribution of pornography (Chidley 58). It has also implemented proxy servers to partially filter out pornographic sites such as "Playboy" and "Penthouse." An anonymous author writes, "When such material is discovered, access providers could be alerted, and required to deny entry to the sites concerned." (Only) This is an ideal approach to censorship and should be exercised in every country. Parents at home can also take more responsibility for what information their young ones retrieve by installing programs like SurfWatch that block pornography (Quittner 45). Another distressing issue is the presence of child pornography on the Internet: "Digitally scanned images of ... naked boys and girls-populate cyberspace." (Chidley 58) Innocent-looking little boys and girls are forced to undress, and their pictures are published on the Internet. How degrading of us as human beings! Furthermore, possession of child pornography is an offense, and the "police are concerned that a shadowy pedophiles' ring, offering child pornography and information on where and how to indulge in their fetish, is operating on an international scale." (Chidley 58) By censoring the Internet, not only will you keep the public safe from the wickedness of pornography, but you will also help enforce the law. Pornography is not the only problem on the Internet; there are many others, some of which I will describe next. 
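The blocklist idea behind proxy filters and programs like SurfWatch can be illustrated with a short sketch. This is not the actual mechanism of SurfWatch or of Singapore's proxy servers, just a minimal hypothetical domain filter; the blocked-domain list is illustrative:

```python
from urllib.parse import urlparse

# Illustrative blocklist; a real filter would ship a much larger database.
BLOCKED_DOMAINS = {"playboy.com", "penthouse.com"}

def is_blocked(url):
    """Return True if the URL's host is a blocked domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("http://www.playboy.com/index.html"))  # True
print(is_blocked("http://www.example.edu/library"))     # False
```

A filtering proxy applies a check like this to every request and refuses to fetch pages whose host matches the list; matching subdomains as well as exact hosts prevents trivial evasion via prefixes like "www".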
Another issue that concerns me is that publications such as bomb-making manuals are easily available online (Kershaw 2). According to Kershaw, "...the wrong people can now get their hands on this information without having to leave the secrecy of their home." (2) This easy availability of such material promotes terrorism: the information needed to make the bomb found in Centennial Park in Atlanta during the Olympics is available on the Internet. The bomb created great chaos, and casualties could easily have been far worse. Not all terrorist attempts have been so limited: scores of innocent people and children were killed in the Oklahoma City bombing and the subway attack in Tokyo. Moreover, many curious children have lost fingers, and even their lives, experimenting with bomb-making. This must stop immediately! Another non-pornographic problem with the Internet is the availability of hate literature. The Internet has also been a place where people express their hatred and anger toward other people. Kershaw says, "...newsgroups on the Internet contain messages which could incite violence against members of various racial, ethnic or religious groups or messages which deny the Holocaust." This sort of information advocates racism and other sensitive kinds of discrimination. In many countries the problem of racism is almost unheard of today, but it will resurface if we let racist minorities influence the public. Racism will then tear our nation apart and trigger wars over trivial matters. Kershaw also says that groups such as the neo-Nazis of America are not uncommon, and many people worry that the Net gives these types of groups a meeting place and a source of empowerment (2). Kershaw also stresses, "One particularly disturbing message found on the Net one week after the Oklahoma bombing ... read, 'I want to make bombs and kill evil Zionist people in the government. Teach me. 
Give me text files.'" The Internet is meant to be a medium that promotes healthy qualities, not a place of hate and evil. "There is a difference between free speech and teaching others how to kill." (Kershaw) Overall, the Internet has many useful applications which are educational and a fresh source of entertainment when television gets too boring. However, we should not grow complacent and ignore the deleterious face of the Internet. We must not rest on our laurels until the Internet is completely free of pornography and other unhealthy elements. Otherwise, the Internet will slowly but surely end up as a sleazy slum operated and dominated by notorious gangs and secret societies. Though it now seems difficult to censor the Internet, we must try our very best to do so, to keep our children away from its dark side; our children remain our highest priority. Let's attack this problem at its source by censoring the Internet, as that is the only rational solution offered to date. We do not want our world to be ravaged by the present state of the Internet! WORKS CITED Beahm, George. War of Words: The Censorship Debate. Kansas City: Andrews and McMeel, 1993. Chidley, Joe. "Red-Light District." Maclean's, 22 May 1995. Galbraith, John Kenneth. "The Page That Formerly Occupied This Site Has Been Taken Down in Disgust!" http://user.holli.com/~kathh/anti.htm Kershaw, Dave. "Censorship and the Internet." http://cmns-web.comm.sfu.ca/cmns353/96-1/dkershaw, 2 Apr. 1996. Montoya, Drake. "The Internet and Censorship." http://esoptron.umd.edu/FUSFOLDER/dmontoya.html, 1995. "Only disconnect." The Economist, 1 July 1995. Quittner, Joshua. "How Parents Can Filter Out the Naughty Bits." Time, 13 July 1995. f:\12000 essays\technology & computers (295)\Secret Addiction.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Secret Addiction Addictions are present in almost all of us. Whether it's a chemical dependence or simply talking on the phone, an addiction can start to control your life before you even realize what is happening. Just a couple of years ago, I had a problem peeling myself away from a certain activity. As odd as it may sound, my own computer dominated my life. I remember the situation quite clearly. On a typical day, I would return home from school at around 3:00 or so. You see, most kids would set their backpacks down in their bedrooms, head to the kitchen, and grab a snack. I was different. The second I walked through the door, I would throw my backpack on the floor, quickly open the refrigerator and grab whatever food was in sight, and then dart up the stairs to the computer chair. Upon start-up of the computer, a warm and pleasant feeling would vibrate through my entire body, straight down my spine. It almost felt as if I were in some sort of heaven. Every keystroke sent a refreshing burst of pleasure to each of my fingertips. The glowing monitor emitted delightful rays that pleased and calmed my eyes. Oh yes, it was great to be home. Of course, this does not even compare to the long, never-ending hours I would spend on this machine. 
Although my bedtime was supposed to be around 10 or 11 in the evening, I would manage to stay on the computer until sometimes as late as 3 in the morning, and this was on school nights as well. I never realized how serious this was until one day my best friend Trevor called me up on the phone. He told me about a hot new movie that was out, and I found myself making an excuse as to why I could not make it. But of course, the real reason was that I had work to do on the computer. However, the only work I had to do was to play that incredible new video game that had just been released. It wasn't until that moment that I finally woke up out of my trance and discovered something that completely blew me away -- I really did not have a life. Every time I went on the computer after that, a feeling of guilt swept through my body. What was it about this machine that forced me to stay on it for so long? After long hours of thought on the matter, I came to a somewhat logical conclusion: it had the power to hypnotize me. After these and other events, I was able to limit the number of hours I spent on it. I found myself doing more activities with my friends and family, eating a regular diet, and even sleeping. To this day, I am still in disbelief at how many hours I really did spend on that thing, but one thing is for sure: my addiction has vanished.

f:\12000 essays\technology & computers (295)\Smart Cars.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Answer A: The TravTek navigation system is installed in 100 Oldsmobile Toronados; the visual part of the system is a computer monitor. Through detailed colour maps, it leads the driver through the town. The map changes all the time, because an on-board computer, connected to a navigation satellite and fitted with a magnetic compass, calculates the fastest or easiest way to your destination.
When yellow circles appear in a particular place on the screen, it means that there is a traffic jam there, or that there has been an accident on the spot. The computer receives this information from the Traffic Management Centre and quickly points out an alternative route.

b: The driver interacts with the system through the so-called "touch screen". 7000 businesses in the area are already listed in the computer, and you can point out your destination by searching through a series of menus until you find it, or simply by typing the name of the street. When the place you want to go to is registered, you push the "make destination" button and the computer programmes a route; a second later the route appears on the screen, while a voice explains it to you through the loudspeaker.

c: The TravTek guides the driver through the traffic. The computer always knows where you are, and the navigation system makes it impossible to get lost in the traffic, unless you really want to and deliberately make the wrong turns. It also guides you past traffic jams and problems that might crop up around an accident. In a town where you have never been, you will quickly be able to find your way to hotels, restaurants, sports arenas, shops, and much more, just by looking through the various menus of the TravTek.

d: The text clearly prefers the accuracy of the computer to the insecurity and misunderstandings that occur between two persons. The passage from line 54 onward clearly shows this point of view (quote): "...a guy on the gas station who, asked for directions, drawls: "Bee Line Expressway? Ummmm. I think you go up here about four miles and take a right, or maybe a left..."" The guy at the gas station is described as an incompetent fool who actually has no idea where he is himself, and his directions, insecure as they already are, will probably also be very hard to remember because of the "Ummmm", "I think", "maybe", and "or".
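The text does not say which routing algorithm TravTek actually uses, but the behaviour it describes, computing a fastest route and then recomputing when a segment is reported jammed, can be sketched with a standard shortest-path search over a road graph. All road names and travel times below are invented for illustration:

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's shortest-path search over travel times in minutes."""
    queue = [(0, start, [start])]   # (elapsed minutes, node, path so far)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, minutes in graph[node].items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + minutes, neighbour, path + [neighbour]))
    return None

# Hypothetical road graph: travel time in minutes between intersections.
roads = {
    "hotel":   {"main_st": 2, "ring_rd": 4},
    "main_st": {"hotel": 2, "arena": 3},
    "ring_rd": {"hotel": 4, "arena": 5},
    "arena":   {"main_st": 3, "ring_rd": 5},
}

print(fastest_route(roads, "hotel", "arena"))   # fastest: via main_st, 5 minutes

# A reported jam raises the cost of the affected segment, so the
# next query picks the alternative route automatically.
roads["main_st"]["arena"] = 30
roads["arena"]["main_st"] = 30
print(fastest_route(roads, "hotel", "arena"))   # now via ring_rd, 9 minutes
```

This is only a sketch of the idea; the real system would also fold in the satellite position fix and the Traffic Management Centre feed when weighting each road segment.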
Answer B: Japanese drivers can now find their way almost blindly if they equip their cars with a digital map that shows the position of the car. Based on the positions of satellites, the position of the car is calculated by a small computer in the receiver. The receiving set in the car is attached to a screen on the dashboard. The screen can show a map of the whole of Japan. The maps are delivered on four laserdiscs, each showing a part of Japan. All road maps are in colour, and they do not show only the network of roads: restaurants and hotels are plotted in as well. A small shining dot shows the car's position on the map.

Answer C: Essay - Smart-Car Technology in Denmark

The smart-car system was developed in the USA and Japan. The system makes it almost impossible to get lost when you are travelling by car. One big question for countries all over the rest of the world is: will this kind of technology match our needs too? Are we able to put it to use? It sounds great, but will it give enough advantages compared to the price, and compared to other possibilities for solving our traffic problems? This question will of course also arise in Denmark. And what can this technology offer to improve our traffic situation? In America and Japan it is made to take care of: problems which appear when people have to find their way; traffic jams, by leading the cars on alternative routes; and problems which appear in connection with accidents. These three problems are very big in such large countries. In big cities with populations of several millions, it is very easy to get lost, even if you have lived there for a long time. The city itself and the complicated network of roads change all the time; new buildings sprout up every day. A system that can keep up with this development is clearly an advantage. But what about Denmark? The road network in Denmark cannot be compared to the American freeways at all.
Even I would be able to find my way from town to town, because there are usually not so many possibilities. We do not have giant cities; Copenhagen is the largest, and I admit that it can be big enough, especially to Jutlanders like myself... The public transportation network in Denmark, on the other hand, is very adequate. No matter where in the country, or in the cities for that matter, you are going, you will be able to find a bus or a train to exactly that place. If we used the great advantage we have in this, it would also take care of a lot of other problems. The main problem with driving your own car is parking: no matter how many new car parks are built, a parking spot is never to be found. Build in a device that could look up parking spots, and some people might see an advantage in the system. Basically, though, I still see public transportation as a much better solution. Concerning accidents and traffic jams, I see no problems on the American scale either. There are accidents and traffic jams on the Danish roads too, but the queues that come with them are usually not so long that a detour pays. In Denmark we also try to create other ways to guide the drivers. An example of this is changeable signs. On the express- and highways, for example the ones leading to Aalborg, there are signs connected to the Traffic Information Centre; they show how much waiting time there is if you choose to use the tunnel or the bridge, which roads are cut off, in which car parks there are vacant spots... and so on - a sort of common TravTek. Another disadvantage that comes with the TravTek, not mentioned in the text, is that the satellite receiver and the computer need a lot of space in the car. I have seen one, and it occupies most of the trunk of an ordinary car.
Bottom line: I consider the smart car a great development for nations such as the USA, which will gain great advantages from it, but for smaller countries such as Denmark I find that there are other things that offer a better solution.

f:\12000 essays\technology & computers (295)\SMART HOUSE.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Some people think that it is difficult to find a relationship between the home and the computer. Usually people think that computers are used only in companies and offices. This is a misconception, as we now have the SMART HOUSE. The complete SMART HOUSE System has been available since early 1993. In a SMART HOUSE, people build a relationship between the computer and the home. The SMART HOUSE is a home management system that allows home owners to easily manage their daily lives by providing a lifestyle that brings together security, energy management, entertainment, communications, and lighting features. The SMART HOUSE system is designed to be installed in a new house; it can also be installed in a home undergoing reconstruction where the walls have been completely exposed. The SMART HOUSE Consortium is investigating a number of different options to more easily install the SMART HOUSE system in an existing home. Moreover, the SMART HOUSE system has been packaged to satisfy any home buyer's needs and budget. The system appeals to a broad segment of new home buyers because of the diverse features and benefits it offers. These segments include professionals, baby boomers in the move-up markets, empty nesters, the young middle class, two-income families, the aging, and all who are energy conscious and technologically astute. Therefore, the SMART HOUSE system is well suited to new homes. Firstly, savings can be gained because the SMART HOUSE System offers several energy management options that have the potential to reduce a home owner's utility bill by 30% or more per year, depending on the options installed.
For example, a smart house can turn lights on and off automatically, which helps save on your electric bill. Moreover, the heating and air conditioning can be controlled more efficiently by a computer, saving tremendously on the cost of maintaining a consistent temperature within a large house. The exact level of savings will vary by house due to local utility rate structures, size of home, insulation, lifestyle, etc. Secondly, it is an easily operated system. Home owners can control their SMART HOUSE System using a menu-driven control panel, touch-tone phone, personal computer, remote control, or programmable wall switch. All SMART HOUSE controls are designed to be simple and easy to use. Because smart houses promote independence, they can help people with disabilities maintain an active life. A smart house system can make everyday tasks easier by automating them. Lights and appliances can be turned on automatically without the user having to do it manually. For people with short-term memory problems, a smart house can remind them to turn off the stove, or even turn the stove off by itself. The SMART HOUSE System is initially programmed by a trained technician, who configures the system using electronic tools designed to guide the technician through the necessary steps of system programming. These tools use a menu-driven format to prompt the technician for the appropriate inputs to customize the system to meet a specific buyer's needs. Then the home owner can create house modes: preprogrammed settings that allow home owners to activate a sequence of events with a single action. House modes can be named to represent general activity patterns common to most homes -- Awake, Asleep, Unoccupied, Vacation, etc. All can be programmed and changed to meet a home owner's needs.
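The house-mode idea described above amounts to a named list of device commands triggered by a single action. A minimal sketch, with all device names and settings invented for illustration (the real SMART HOUSE tools are menu-driven, not code):

```python
# Each mode maps to the ordered list of (device, command) pairs that are
# issued when the mode is activated with one action.
HOUSE_MODES = {
    "AWAKE": [
        ("thermostat",   "set 21C"),
        ("water_heater", "turn up"),
        ("security",     "daytime settings"),
        ("lights",       "on"),
        ("coffee_maker", "start"),
        ("tv",           "on"),
    ],
    "ASLEEP": [
        ("thermostat",   "set 17C"),
        ("lights",       "off"),
        ("security",     "armed"),
    ],
}

def activate(mode):
    """Run every command in a mode, returning a log of what was done."""
    return [f"{device}: {command}" for device, command in HOUSE_MODES[mode]]

for line in activate("AWAKE"):
    print(line)
```

Reprogramming a mode is then just editing its list, which matches the essay's point that all modes "can be programmed and changed to meet a home owner's needs."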
An example of a house mode is an AWAKE mode, which can be programmed to do such things in the morning as: turn up the heat, turn up the water heater, change the security system settings, turn on the lights, start the coffee, turn on the TV, etc. Thirdly, in a power outage home owners will not be able to use their system, which is the case with all electrical products, simply because electrical power is required for the SMART HOUSE system to operate. However, the system controller will re-boot itself when the power comes back on, and the system's programming will be maintained. If the system fails, home owners will still be able to operate their home's products and appliances manually. The SMART HOUSE System is specifically designed so that if the system fails, the house still provides, at a minimum, all of the functionality of a conventionally wired home. For example, outlets will revert to what is called "local control", so that they still provide power to anything plugged into them. In conclusion, the SMART HOUSE System will be a new trend in home construction in the coming decades. It will bring the computer and people closer together. It is likely to be supported by people who believe in environmental protection, because it can reduce wasted utilities and save money. It also saves people time through a centralized system that can be controlled easily.
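The claim that "the system's programming will be maintained" across an outage implies the controller keeps its settings in some non-volatile storage and reloads them on re-boot. A toy sketch of that behaviour, using a JSON file to stand in for whatever storage the real controller uses (file name and default mode are invented):

```python
import json
import os

SETTINGS_FILE = "smart_house_settings.json"   # stand-in for non-volatile memory

def save_settings(settings):
    """Persist the home owner's programming whenever it changes."""
    with open(SETTINGS_FILE, "w") as f:
        json.dump(settings, f)

def reboot_and_restore():
    """On power-up, reload the stored programming; fall back to a default."""
    if os.path.exists(SETTINGS_FILE):
        with open(SETTINGS_FILE) as f:
            return json.load(f)
    return {"mode": "UNOCCUPIED"}             # conventional-home fallback

save_settings({"mode": "VACATION", "thermostat": 15})
# ...power outage: the controller loses power, then re-boots...
restored = reboot_and_restore()
print(restored["mode"])   # VACATION
```

The fallback branch mirrors the essay's fail-safe point: with no stored programming, the house still behaves like a conventionally wired home.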
The SMART HOUSE is a home management system that allows home owners to easily manage their daily lives by providing for a lifestyle that brings together security, energy management, entertainment, communications, and lighting features. So, the SMART HOUSE system is designed to be installed in a new house. Moreover, the system can be installed in a home undergoing reconstruction where walls have been completely exposed. The SMART HOUSE Consortium is investigating a number of different option to more easily install the SMART HOUSE system in an existing home. Moreover, the SMART HOUSE system has been packaged to satisfy any home buyer's needs and budget. The system appeals to a broad segment of new home buyers because of the diverse features and benefits it offers. These segments includes professionals, baby boomers in the move up markets, empty nesters, young middle-class, two - income families, the aging, and all who are energy conscious and technologically astute. Therefore, the SMART HOUSE system is suitable to install in new homes. Firstly, more saving can be gained when the SMART HOUSE System offers several energy management options that have the potential to reduce a home owner's utility bill by 30% or more per year depending on the options installed. For examples, a smart house can turn lights on and off automatically, it can help save on your electric bill. Moreover, the heating and air conditioning can be more efficiently controlled by a computer, saving tremendously on the cost of maintaining a consistent temperature within a large house. The exact level of savings will pay vary by house due to local utility rate structures, size of home, insulation, lifestyle, etc. Secondly, it is an easily operating system. Home owners can control their SMART HOUSE System using a menu driven control panel, touch-tone phone, personal computer, remote control or programmable wall switch. All SMART HOUSE controls are designed to be simple and easy to use. 
Because smart houses are independence, they can help people with disabilities maintain an active life. A smart house system can make such tasks easier by automating them. Lights and appliances can be turned on automatically without the user having to do it manually. For people with short term memory problem, a smart house can remind them to turn off the stove or even turn the stove off by itself. The SMART HOUSE System is initially programmed by a trained technician who configures the system using electronic tools designed to guide the technical through the necessary steps of System programming. These tools use a menu driven format to prompt the technician for the appropriate inputs to customize the System to meet a specific buyer's needs. Then, the home owner can create some house modes that are preprogrammed settings that allow home owners to activate a sequence of events with a single action. House modes can be named to represent general activity patterns common to most homes -- Awake, Asleep, Unoccupied, Vacation, etc. All can be programmed and changed to meet a home owner's needs. An example of a house mode is an AWAKE mode which can be programmed in the morning to do such things as: turn up the heat, turn up the water heater, change the security system settings, turn on the lights start the coffee and turn on the TV, etc. Thirdly, in a power outage, home owners will not able to use their system, which is the case with all electrical products, simply because electrical power is required in order for the SMART HOUSE system to operate. However, the system controller will re-boot itself when the power comes back on and the system's programming will be maintained. When the system fails, the home owners will be able to manually operate their home's products and appliances. The SMART HOUSE System is specifically designed so that if the system fails, the house still provides, at a minimum, all of the functionality provided by a conventionally wired home. 
For example, outlets will revert to what is called " Local control " so that they still provide power to anything plugged into them. In conclusion, SMART HOUSE System will be the new trend of the home construction in the following decades. It will make closer the relationship between computer and people. It seems to be supported by some people who believe in environment protection because it can reduce the waste in utility and save more money for people. It also saves times for people by the centralized system that can be controlled easily. 1 Some people think that it is difficult to find a relationship between home and computer. Usually people think that computer just using in a company and office. It is a misleading concept as we have a SMART HOUSE. The complete SMART HOUSE System has been available since early 1993. In a SMART HOUSE, people build a relationship between computer and home. The SMART HOUSE is a home management system that allows home owners to easily manage their daily lives by providing for a lifestyle that brings together security, energy management, entertainment, communications, and lighting features. So, the SMART HOUSE system is designed to be installed in a new house. Moreover, the system can be installed in a home undergoing reconstruction where walls have been completely exposed. The SMART HOUSE Consortium is investigating a number of different option to more easily install the SMART HOUSE system in an existing home. Moreover, the SMART HOUSE system has been packaged to satisfy any home buyer's needs and budget. The system appeals to a broad segment of new home buyers because of the diverse features and benefits it offers. These segments includes professionals, baby boomers in the move up markets, empty nesters, young middle-class, two - income families, the aging, and all who are energy conscious and technologically astute. Therefore, the SMART HOUSE system is suitable to install in new homes. 
Firstly, more saving can be gained when the SMART HOUSE System offers several energy management options that have the potential to reduce a home owner's utility bill by 30% or more per year depending on the options installed. For examples, a smart house can turn lights on and off automatically, it can help save on your electric bill. Moreover, the heating and air conditioning can be more efficiently controlled by a computer, saving tremendously on the cost of maintaining a consistent temperature within a large house. The exact level of savings will pay vary by house due to local utility rate structures, size of home, insulation, lifestyle, etc. Secondly, it is an easily operating system. Home owners can control their SMART HOUSE System using a menu driven control panel, touch-tone phone, personal computer, remote control or programmable wall switch. All SMART HOUSE controls are designed to be simple and easy to use. Because smart houses are independence, they can help people with disabilities maintain an active life. A smart house system can make such tasks easier by automating them. Lights and appliances can be turned on automatically without the user having to do it manually. For people with short term memory problem, a smart house can remind them to turn off the stove or even turn the stove off by itself. The SMART HOUSE System is initially programmed by a trained technician who configures the system using electronic tools designed to guide the technical through the necessary steps of System programming. These tools use a menu driven format to prompt the technician for the appropriate inputs to customize the System to meet a specific buyer's needs. Then, the home owner can create some house modes that are preprogrammed settings that allow home owners to activate a sequence of events with a single action. House modes can be named to represent general activity patterns common to most homes -- Awake, Asleep, Unoccupied, Vacation, etc. 
All can be programmed and changed to meet a home owner's needs. An example of a house mode is an AWAKE mode which can be programmed in the morning to do such things as: turn up the heat, turn up the water heater, change the security system settings, turn on the lights start the coffee and turn on the TV, etc. Thirdly, in a power outage, home owners will not able to use their system, which is the case with all electrical products, simply because electrical power is required in order for the SMART HOUSE system to operate. However, the system controller will re-boot itself when the power comes back on and the system's programming will be maintained. When the system fails, the home owners will be able to manually operate their home's products and appliances. The SMART HOUSE System is specifically designed so that if the system fails, the house still provides, at a minimum, all of the functionality provided by a conventionally wired home. For example, outlets will revert to what is called " Local control " so that they still provide power to anything plugged into them. In conclusion, SMART HOUSE System will be the new trend of the home construction in the following decades. It will make closer the relationship between computer and people. It seems to be supported by some people who believe in environment protection because it can reduce the waste in utility and save more money for people. It also saves times for people by the centralized system that can be controlled easily. 1 Some people think that it is difficult to find a relationship between home and computer. Usually people think that computer just using in a company and office. It is a misleading concept as we have a SMART HOUSE. The complete SMART HOUSE System has been available since early 1993. In a SMART HOUSE, people build a relationship between computer and home. 
The SMART HOUSE is a home management system that allows home owners to easily manage their daily lives by providing for a lifestyle that brings together security, energy management, entertainment, communications, and lighting features. So, the SMART HOUSE system is designed to be installed in a new house. Moreover, the system can be installed in a home undergoing reconstruction where walls have been completely exposed. The SMART HOUSE Consortium is investigating a number of different option to more easily install the SMART HOUSE system in an existing home. Moreover, the SMART HOUSE system has been packaged to satisfy any home buyer's needs and budget. The system appeals to a broad segment of new home buyers because of the diverse features and benefits it offers. These segments includes professionals, baby boomers in the move up markets, empty nesters, young middle-class, two - income families, the aging, and all who are energy conscious and technologically astute. Therefore, the SMART HOUSE system is suitable to install in new homes. Firstly, more saving can be gained when the SMART HOUSE System offers several energy management options that have the potential to reduce a home owner's utility bill by 30% or more per year depending on the options installed. For examples, a smart house can turn lights on and off automatically, it can help save on your electric bill. Moreover, the heating and air conditioning can be more efficiently controlled by a computer, saving tremendously on the cost of maintaining a consistent temperature within a large house. The exact level of savings will pay vary by house due to local utility rate structures, size of home, insulation, lifestyle, etc. Secondly, it is an easily operating system. Home owners can control their SMART HOUSE System using a menu driven control panel, touch-tone phone, personal computer, remote control or programmable wall switch. All SMART HOUSE controls are designed to be simple and easy to use. 
Because smart houses are independence, they can help people with disabilities maintain an active life. A smart house system can make such tasks easier by automating them. Lights and appliances can be turned on automatically without the user having to do it manually. For people with short term memory problem, a smart house can remind them to turn off the stove or even turn the stove off by itself. The SMART HOUSE System is initially programmed by a trained technician who configures the system using electronic tools designed to guide the technical through the necessary steps of System programming. These tools use a menu driven format to prompt the technician for the appropriate inputs to customize the System to meet a specific buyer's needs. Then, the home owner can create some house modes that are preprogrammed settings that allow home owners to activate a sequence of events with a single action. House modes can be named to represent general activity patterns common to most homes -- Awake, Asleep, Unoccupied, Vacation, etc. All can be programmed and changed to meet a home owner's needs. An example of a house mode is an AWAKE mode which can be programmed in the morning to do such things as: turn up the heat, turn up the water heater, change the security system settings, turn on the lights start the coffee and turn on the TV, etc. Thirdly, in a power outage, home owners will not able to use their system, which is the case with all electrical products, simply because electrical power is required in order for the SMART HOUSE system to operate. However, the system controller will re-boot itself when the power comes back on and the system's programming will be maintained. When the system fails, the home owners will be able to manually operate their home's products and appliances. The SMART HOUSE System is specifically designed so that if the system fails, the house still provides, at a minimum, all of the functionality provided by a conventionally wired home. 
For example, outlets will revert to what is called " Local control " so that they still provide power to anything plugged into them. In conclusion, SMART HOUSE System will be the new trend of the home construction in the following decades. It will make closer the relationship between computer and people. It seems to be supported by some people who believe in environment protection because it can reduce the waste in utility and save more money for people. It also saves times for people by the centralized system that can be controlled easily. 1 Some people think that it is difficult to find a relationship between home and computer. Usually people think that computer just using in a company and office. It is a misleading concept as we have a SMART HOUSE. The complete SMART HOUSE System has been available since early 1993. In a SMART HOUSE, people build a relationship between computer and home. The SMART HOUSE is a home management system that allows home owners to easily manage their daily lives by providing for a lifestyle that brings together security, energy management, entertainment, communications, and lighting features. So, the SMART HOUSE system is designed to be installed in a new house. Moreover, the system can be installed in a home undergoing reconstruction where walls have been completely exposed. The SMART HOUSE Consortium is investigating a number of different option to more easily install the SMART HOUSE system in an existing home. Moreover, the SMART HOUSE system has been packaged to satisfy any home buyer's needs and budget. The system appeals to a broad segment of new home buyers because of the diverse features and benefits it offers. These segments includes professionals, baby boomers in the move up markets, empty nesters, young middle-class, two - income families, the aging, and all who are energy conscious and technologically astute. Therefore, the SMART HOUSE system is suitable to install in new homes. 
Firstly, more saving can be gained when the SMART HOUSE System offers several energy management options that have the potential to reduce a home owner's utility bill by 30% or more per year depending on the options installed. For examples, a smart house can turn lights on and off automatically, it can help save on your electric bill. Moreover, the heating and air conditioning can be more efficiently controlled by a computer, saving tremendously on the cost of maintaining a consistent temperature within a large house. The exact level of savings will pay vary by house due to local utility rate structures, size of home, insulation, lifestyle, etc. Secondly, it is an easily operating system. Home owners can control their SMART HOUSE System using a menu driven control panel, touch-tone phone, personal computer, remote control or programmable wall switch. All SMART HOUSE controls are designed to be simple and easy to use. Because smart houses are independence, they can help people with disabilities maintain an active life. A smart house system can make such tasks easier by automating them. Lights and appliances can be turned on automatically without the user having to do it manually. For people with short term memory problem, a smart house can remind them to turn off the stove or even turn the stove off by itself. The SMART HOUSE System is initially programmed by a trained technician who configures the system using electronic tools designed to guide the technical through the necessary steps of System programming. These tools use a menu driven format to prompt the technician for the appropriate inputs to customize the System to meet a specific buyer's needs. Then, the home owner can create some house modes that are preprogrammed settings that allow home owners to activate a sequence of events with a single action. House modes can be named to represent general activity patterns common to most homes -- Awake, Asleep, Unoccupied, Vacation, etc. 
All can be programmed and changed to meet a home owner's needs. An example of a house mode is an AWAKE mode which can be programmed in the morning to do such things as: turn up the heat, turn up the water heater, change the security system settings, turn on the lights start the coffee and turn on the TV, etc. Thirdly, in a power outage, home owners will not able to use their system, which is the case with all electrical products, simply because electrical power is required in order for the SMART HOUSE system to operate. However, the system controller will re-boot itself when the power comes back on and the system's programming will be maintained. When the system fails, the home owners will be able to manually operate their home's products and appliances. The SMART HOUSE System is specifically designed so that if the system fails, the house still provides, at a minimum, all of the functionality provided by a conventionally wired home. For example, outlets will revert to what is called " Local control " so that they still provide power to anything plugged into them. In conclusion, SMART HOUSE System will be the new trend of the home construction in the following decades. It will make closer the relationship between computer and people. It seems to be supported by some people who believe in environment protection because it can reduce the waste in utility and save more money for people. It also saves times for people by the centralized system that can be controlled easily. 1 Some people think that it is difficult to find a relationship between home and computer. Usually people think that computer just using in a company and office. It is a misleading concept as we have a SMART HOUSE. The complete SMART HOUSE System has been available since early 1993. In a SMART HOUSE, people build a relationship between computer and home. 
The SMART HOUSE is a home management system that allows home owners to easily manage their daily lives by providing for a lifestyle that brings together security, energy management, entertainment, communications, and lighting features. The SMART HOUSE system is designed to be installed in a new house. It can also be installed in a home undergoing reconstruction where walls have been completely exposed, and the SMART HOUSE Consortium is investigating a number of different options to more easily install the SMART HOUSE system in an existing home. Moreover, the SMART HOUSE system has been packaged to satisfy any home buyer's needs and budget. The system appeals to a broad segment of new home buyers because of the diverse features and benefits it offers. These segments include professionals, baby boomers in the move-up markets, empty nesters, young middle-class two-income families, the aging, and all who are energy conscious and technologically astute. Therefore, the SMART HOUSE system is suitable to install in new homes. 
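The house modes described above amount to named lists of actions that the controller runs as a single unit. A minimal sketch of that idea follows; the mode names come from the essay, but the specific actions and the function name are illustrative assumptions, not the real SMART HOUSE programming interface:

```python
# Each house mode maps to a sequence of device actions triggered by one command.
# Mode names are from the essay; the actions themselves are made-up examples.
HOUSE_MODES = {
    "Awake": ["turn up heat", "turn up water heater", "change security settings",
              "turn on lights", "start coffee", "turn on TV"],
    "Asleep": ["turn down heat", "turn off lights", "set security to night mode"],
    "Unoccupied": ["turn down heat", "turn off lights", "arm security"],
    "Vacation": ["set heat to minimum", "arm security", "run lights on random timer"],
}

def activate(mode):
    """Return the action log the controller would produce for one mode."""
    return ["%s: %s" % (mode, action) for action in HOUSE_MODES[mode]]

for line in activate("Awake"):
    print(line)
```

The point of the design is that the home owner programs the list once and afterwards triggers the whole sequence with a single action.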
f:\12000 essays\technology & computers (295)\Society and the role that computers play in USA.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The microeconomic picture of the U.S. has changed immensely since 1973, and the trends are proving to be consistently downward for the nation's high school graduates and high school drop-outs. "Of all the reasons given for the wage squeeze - international competition, technology, deregulation, the decline of unions and defense cuts - technology is probably the most critical. It has favored the educated and the skilled," says M. B. Zuckerman, editor-in-chief of U.S. News & World Report (7/31/95). Since 1973, wages adjusted for inflation have declined by about a quarter for high school dropouts, by a sixth for high school graduates, and by about 7% for those with some college education. Only the wages of college graduates are up. Of the fastest growing technical jobs, software engineering tops the list. Carnegie Mellon University reports, "recruitment of its software engineering students is up this year by over 20%." All engineering jobs are paying well, proving that highly skilled labor is what employers want! "There is clear evidence that the supply of workers in the [unskilled labor] categories already exceeds the demand for their services," says L. 
Mishel, Research Director of the Welfare Reform Network. In view of these facts, I wonder whether these trends are good or bad for society. "The danger of the information age is that while in the short run it may be cheaper to replace workers with technology, in the long run it is potentially self-destructive because there will not be enough purchasing power to grow the economy," says M. B. Zuckerman. My feeling is that the trend from unskilled labor to highly technical, skilled labor is a good one! But political action must be taken to ensure that this societal evolution is beneficial to all of us. "Back in 1970, a high school diploma could still be a ticket to the middle income bracket, a nice car in the driveway and a house in the suburbs. Today all it gets is a clunker parked on the street, and a dingy apartment in a low rent building," says Time Magazine (Jan 30, 1995 issue). However, in 1970, our government provided our children with a free education, allowing the vast majority of our population to earn a high school diploma. This means that anyone, regardless of family income, could be educated to a level that would allow them a comfortable place in the middle class. Even restrictions upon child labor hours kept children in school, since they were not allowed to work full time while under the age of 18. This government policy was conducive to our economic markets, and allowed our country to prosper from 1950 through 1970. Now, our own prosperity has moved us into a highly technical world that requires highly skilled labor. The natural answer to this problem is that the U.S. Government's education policy must keep pace with the demands of the highly technical job market. If a middle class income of 1970 required a high school diploma, and the middle class income of 1990 requires a college diploma, then it should be as easy for the children of the 90's to get a college diploma as it was for the children of the 70's to get a high school diploma. 
This brings me to the issue of our country's political process in a technologically advanced world. Voting & the Poisoned Political Process in the U.S. The advance of mass communication is natural in a technologically advanced society. In our country's short history, we have seen the development of the printing press, the radio, the television, and now the Internet, all of them able to reach millions of people. Equally natural is the poisoning and corruption of these media to benefit a few. From the 1950's until today, television has been the preferred medium. Because it captures the minds of most Americans, it is the preferred method of persuasion for political figures, multinational corporate advertising, and the upper 2% of the elite, who have an interest in controlling public opinion. Newspapers and radio experienced this same history, but are now somewhat obsolete in the science of changing public opinion. Though I do not expect television to become completely obsolete within the next 20 years, I do see the Internet being used by the same political figures, multinational corporations, and upper 2% elite, for the same purposes. At this time, in the Internet's young history, it is largely unregulated, and can be accessed and changed by any person with a computer and a modem; no license required, and no need for millions of dollars of equipment. But in reviewing our history, we find that newspaper, radio and television were once unregulated too. It is easy to see why government has such an interest in regulating the Internet these days. Though public opinion supports regulating sexual material on the Internet, that is just the first step toward total regulation, as experienced by every other popular mass medium in our history. This is why it is imperative to educate people about the Internet, and make it known that any regulation of it is destructive to us, not constructive! 
I have been a daily user of the Internet for 5 years (and a daily user of BBS communications for 9 years), which makes me a senior among us. I have seen the moves to regulate this type of communication, and have always openly opposed them. My feelings about technology, the Internet, and the political process are simple. In light of the history of mass communication, there is nothing we can do to protect any medium from the "sound bite" or any other form of commercial poisoning. But our country's public opinion doesn't have to fall into a nose-dive of lies and corruption because of it! My first experience with a course on Critical Thinking came when I entered college. Of all the good things I have learned in college, I found this course to be the most valuable to my basic education. I was angry that I hadn't had access to the power of critical thought over my twelve years of basic education. Simple forms of critical thinking can be taught as early as kindergarten. It isn't hard to teach a young person to understand the patterns of persuasion and be able to defend themselves against them. Television doesn't have to be a weapon against us, used to sway our opinions to conform to people who care about their own prosperity, not ours. With the power of a critical thinking education, we can stop being motivated by the sound bite and instead laugh at it as a cheap attempt to persuade us. In conclusion, I feel that the advance of technology is a good trend for our society; however, it must go hand in hand with advances in education so that society is able to master and understand technology. We can be the masters of technology, and not let it be the master of us. f:\12000 essays\technology & computers (295)\Software and Highschool.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ SOFTWARE AND HIGH SCHOOL The beginning of the 1990's is marked by the era of computers. Everywhere we look, we see computers. 
They have become an essential part of our everyday life. If the world's computer systems were turned off even for a short amount of time, unimaginable disasters would occur. We can surely say that today's world is heading into the future with the tremendous influence of computers. These machines are very important players in the game; the key to success, however, is proper software (computer programs). It is the software that enables computers to perform certain tasks. Educational systems in developed countries realize the importance of computers in the future world and therefore emphasize their use in schools and secondary institutions. The proper choice of software is very important, especially for beginners. Their first encounter with the computer should be exciting and fun. It should stimulate their interest in the computing field. First and foremost is the fact that computer software is a very important educational tool. Students in high schools experience computers for the first time through games and other entertaining software. These help develop young people's mental pathways in the way of logic, reflexes and the ability to make quick and concrete decisions [Lipcomb, 66]. The next step requires them to think more seriously about the machines. Secondary students learn the first steps in computer programming by creating simple programs. Here, the assistance of useful software is necessary. Computer software has many applications in the real world and is found virtually everywhere. The new generation of very fast computers introduces us to a new type of software. Multimedia is a type of computer program that not only delivers written data to the user, but also provides visual support for the topic. By exploring the influence of multimedia upon high school students, I have concluded that the usage of multimedia has significantly increased students' interest in the particular topics supported by the multimedia. 
In order to get these positive results, every child has to have a chance to use the technology on a daily basis [jacsuz@]. Mathematics is one of the scientific fields that has employed the full potential of computer power for complicated problem solving. By using the computer, students learn to solve difficult problems even before they acquire tough mathematical vocabulary. The Geometer's Sketchpad, a kind of math software, is used in many Canadian high schools as a powerful math tutor. Students can pull and manipulate geometric figures and at the same time give them specific attributes. The next best feature of the software is a drawing document. It allows for easy drawing of perfect ellipses, rectangles and lines. Overall, students' marks in the particular subjects that have used helpful software have significantly increased [mhurych@]. Computers have been used commercially for well over 50 years; their use in modern society, however, has never been so widespread. People rely on computers in every aspect of their lives. Medicine, engineering and other highly specialized fields of science use computers in their work. Computer education is very important. It builds the basis for future generations, which will be more dependent on computers than we are today. The usage of computers depends mainly on the software. It is software that navigates computers through a series of commands to a desired goal. Computer programs used in high schools must motivate students to study. The degree of difficulty of the computer software has to increase with the age of the user. Games are introduced first as icebreakers between children and machines. Later, more difficult software is used. Overall, I think that computer software is a very important tool in high school education. Drake (1987). f:\12000 essays\technology & computers (295)\Software Piracy A Big Crime With Big Consequences.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ [Error] - File could not be written... 
f:\12000 essays\technology & computers (295)\Software Piracy and its Effects.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Software Piracy and its Effects Identification and Description of the Issue Copyright laws are perhaps the laws breached most often by individuals on a daily basis. This is because one might not be informed about these laws, or because not much is done to enforce them. Also, some countries of the world have no copyright laws. Software piracy is a breach of copyright law, as one copies data contained on one medium onto another medium without the consent of the owner of the software. When one buys software, one does not buy the software's content, and therefore it isn't one's property. Instead one buys a license to use the software in accordance with the licensing agreement. Software companies invest a lot of time and money in creating software, and a company relies upon the sales of the software for its survival. If illegal copies are made of software, the company earns no money and could therefore be forced into bankruptcy. Software piracy can be compared to robbery, as one is stealing the goods of someone else and using them without paying. Up to 13 billion dollars are lost to computer piracy yearly, and in order to recover these costs the companies are forced to raise the prices of their products. Brand names are the property of their respective companies, and they have the right to protect their property. Understanding of the IT Background of the Issue Software is contained on disk or on a CD-ROM. Pirate copies can easily be made of software on disk by copying from one disk to another. For CD-ROMs one needs a CD-ROM burner, or one copies the content onto a large hard disk and then onto floppy disks. There are some underground bulletin boards (BBS) that contain pirated software. 
A user who logs on to one of these BBSs can download full versions of pirated software, provided one can give something in return. On the Internet there are binary newsgroups such as alt.binaries.warez, WWW pages and FTP sites that also contain pirated software. On the newsgroups the files are sent upon request from anonymous users. As a result, people who have access to the Internet can retrieve these software programs free of charge. The person posting the pirated software could be from a country that has no copyright laws. These methods used in software piracy are hard to stop because of the fact that it is done on the Internet and between individuals from different countries. Buying one legitimate copy of a software package and then offering it over an internal network, so that it can be accessed by more than one individual at the same time on different computers, is another form of software piracy. Analysis of the Impact of the Issue A software program is a service just like any other service; the difference is that this service comes on a medium from which one can make copies. Software could be judged as being expensive, but one who wants it and doesn't want to pay for it, or can't afford to, could be judged as being a thief. Office 97 from Microsoft required 3 years to develop, and Microsoft invested millions of dollars. Microsoft will rely upon legitimate sales of this product for income, and upon those sales will base decisions about future versions of the product. If people don't pay for the program but make pirated copies, Microsoft doesn't earn a cent and therefore could be forced not to make future versions of the product. This would mean that the computer industry's growth would be halted, and that one would not see newer software technology, as companies will not have the incentive to bring out better products if they don't get anything out of it. Unfortunately, many people aren't able to see this, as many pirated copies are not only made but, more importantly, also used. 
Society doesn't see software pirates as thieves or criminals; that could be because many people themselves make illegal copies in order not to pay. In economic terms this means that if a company can't make money it will eventually have to go bankrupt, which would mean that people lose their jobs. Solution to Problems Arising from the Issue Software piracy will have to be fought on 2 different levels: the first level will be the political and the second will be the technological. On the political level one will have to pass tougher legislation against software pirates. Also, the government should make agreements with other governments on software piracy, encouraging them to be tough on software pirates and to try to combat them. Appropriate action should be taken against countries who fail to do so. On the technological level one should consider inserting "harder" copy protection in the software product. One such method could be that once the installation of the program has been completed, one has to register the product and receive a code, with each code being different according to different variables. Some software programs have copy protection on them, but all of them are cracked very quickly by hackers, and therefore it is unlikely that any new copy protection will remain uncracked. Too much copy protection could drive away legitimate consumers. Until now politicians haven't really looked into the problem of software piracy and copyright very thoroughly, as they think that there are bigger problems to solve. Once tough legislation is passed and people are made aware that software piracy is a crime, one could see a fall in software piracy. Dealing with other countries involves a lot of bureaucracy, and also requires a committed government. Anything on the political level could take years for any effects to be seen, but it has the greatest chance of solving the problem in the long run. 
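The registration-code idea described above can be sketched in a few lines: the vendor derives a code from variables unique to the buyer, and the installer recomputes and compares it. This is a toy illustration only; the secret, product ID, and hashing scheme are all assumptions, not how any real product's activation works:

```python
import hashlib

SECRET = "vendor-secret"  # assumption: a value known only to the vendor

def make_code(product_id, machine_id):
    """Derive a registration code tied to one product copy and one machine."""
    digest = hashlib.sha256(
        ("%s:%s:%s" % (SECRET, product_id, machine_id)).encode()
    ).hexdigest()
    return digest[:8].upper()  # short code the vendor reads to the customer

def verify_code(product_id, machine_id, code):
    """The installer recomputes the code with the same variables and compares."""
    return make_code(product_id, machine_id) == code

code = make_code("OFFICE97-0001", "machine-42")
print(verify_code("OFFICE97-0001", "machine-42", code))  # True
print(verify_code("OFFICE97-0001", "machine-43", code))  # False: copy moved to another machine
```

Because the code depends on "different variables" (here, a machine identifier), a code issued for one installation fails on another, which is exactly the per-copy protection the essay proposes.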
On the technological side one could solve the problem only in the short term, but implementation would be fast. Sources: Internet page WWW.pcworld.com/News, December 96; Business Software Alliance information sheet on software piracy; Computer Ethics, Tom Forester & Perry Morrison, Chapter 3, "Software Theft," pages 51-72; CNN Computer Connection, December 96 - January 97; PC Magazine, entire 96 volume; Reuters, InfoWorld, Vol. 19, No. 6; Reuters, 6 Feb. 97; Media Daily, Jan 30, 1997, article on FBI crackdown on software pirates f:\12000 essays\technology & computers (295)\Software Piracy.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Almost every day, it seems, software companies keep pumping out brand new software that kills the day before's, in that it is more sophisticated and more in tune with the needs of today's superusers, office users, and home users. Yet, at the same time, the software theft industry is growing at an even faster rate, costing software companies billions of dollars a year. The piece of shit government can put as many copyright laws in a book as they feel, but soon it will be virtually impossible to stop. Although computer illiteracy may still lurk by the thousands, computer intelligence lurks by the millions and even billions. We are going to bypass any laws you throw at us. There is no stopping it. America has gotta wake up; no matter what kind of warning you put out, or whatever other restrictions you try to enforce, there will always be another way. No matter what kind of encryption, there will always be someone out there, whether it be me or the next guy, whose intelligence is greater than those who make the software. According to the federal government, which by the way has no real control over America since they can't even control themselves, software is protected from the moment of its creation. As soon as that software hits the store it is protected by the United States Federal Government. 
Yet thousands of software titles have been put out there, and the government hasn't protected a fucking thing from happening. What a joke; how can we let such morons run this nation. The law in the USA states that a person who buys the software may (I) copy it to a single computer and (II) make a copy for "archival purposes". This also holds true in Canada, with the exception that the user is only able to make a backup copy, instead of the USA law which allows both archival and backup copies. In actuality, the government cannot babysit everyone who buys software. How are they gonna know when John Doe buys a copy of Duke Nukem 3D and wants to install it on Jane Smith's computer so they can get some network games going on. Yeah right, they have control. People who do get caught have a chance of being fined up to 1 million dollars? Jesus, all I did was install a program. Ahh... the beauty of America. In a nutshell, the probability of someone stopping it from happening is slim to none. You take it away from the Internet, we will go to mail, and so begins the cycle. Once you are so caught up in trying to stop one thing, we will go back and mess you up again. God bless America! f:\12000 essays\technology & computers (295)\Solving Problem Creatively Over the Net.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Dishonest Me Since I got my Internet privileges 3 months ago, I have learned and encountered many weird and wonderful things. I have met the ugly side of the Internet and learned something called "if you overspend your time limit, the phone bill is gonna be very ugly." Perhaps the most interesting moment I have encountered on the Internet was when I discovered homepage making. I made a homepage by learning the HTML language from a web site. I wanted my homepage to be bold and simple, but most of all animation-free. As a surfer myself, I know how it feels to enter a homepage that is full of high resolution graphics and animation. 
The animation had to be reloaded again and again. Within 2 hours I managed to make myself a homepage. I also knew that to make an impressive homepage, one must have a high counter number so that people will revisit the homepage. I couldn't use any "sensual" words to attract people because it's against Geocities' rules. So I did a very nasty thing. I cheated: I put an extra counter number in my homepage beside my original counter number, so each time it loads it looks like this ----> 0101. While the only person who visited my homepage was myself, the counter number showed 101. MIRC The Solution When my PC suffered a data crash, I lost all my data. I lost all my e-mail addresses and, most importantly, my browser. The computer technician managed to repair my PC, but he gave me an old version of Netscape. I had trouble using it in Win '95, so I downloaded a later version of Netscape. The downloading seized when it reached 52%. I would have had to reload it from the start if I were to use Netscape. Instead I used MIRC to download the program, because MIRC comes with a neat feature that allows me to resume downloading where I left off. As a result, I got to continue my downloading at byte 17564 from a friend. I'm The Biggest Leech The rule of warez is: to download you must upload. The warez people even wrote a script to ban people who don't upload when they download. To upload any program of mine to anyone would take forever; all the files I have are at least 6 MB long. So what I did was gather all my saved files and compress them; they summed up to 1.5 MB, and I named the archive Ultra.zip. Then I sent it to the warez people. I found out that if you send the same file a second time, the script will recognize it as a different file and immediately add credit to my downloading account. As a result I got 9 MB of credit within minutes when actually I sent 1.5 MB. 
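The resume feature described above is the same idea as an HTTP range request: check how many bytes you already have on disk, then ask the sender only for the rest. A minimal sketch of the offset logic follows (the function names are illustrative; no network is needed to see the idea):

```python
import os

def resume_offset(partial_path):
    """Bytes already on disk; a resumed transfer continues from here."""
    return os.path.getsize(partial_path) if os.path.exists(partial_path) else 0

def range_header(partial_path):
    """Build the HTTP header that asks a server for only the remaining bytes."""
    offset = resume_offset(partial_path)
    # An empty dict means "start from scratch": no partial file exists yet.
    return {"Range": "bytes=%d-" % offset} if offset else {}

# e.g. a transfer that stopped at byte 17564 would resume with
# the header {'Range': 'bytes=17564-'} instead of re-downloading everything.
```

MIRC's DCC resume negotiates the offset over IRC rather than HTTP, but the principle is identical: transmit the current file size, then append from that position.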
f:\12000 essays\technology & computers (295)\Speeding Up Windows 95.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ SPEEDING UP WINDOWS 95 Windows 95, with certain minor alterations and software upgrades, can operate at a faster, more efficient speed. With this Windows 95 tutorial, all the things you do now will be easier and faster, and what you always wanted to know is now here for you to learn. This tutorial will provide you with insightful, instructional, and informative tips about free programs such as TweakUI, and about day-to-day OS maintenance needs. First, it is very important that you run Windows 95 with at least a high-end 486 (Pentium recommended), 8 megs of RAM (adding more RAM will increase overall performance), and at least 1 meg of video memory. Most of the following tips are for speeding up application processes, while others are simply rewrites or bug fixes. One advantage Windows 95 has over its competitors is the user interface feature that comes built into the operating system. The user interface is the part of Windows 95 that allows customization of certain interface settings based on personal preference. About a year ago Microsoft released a small program called TweakUI that adds more flexibility and functionality to the already user-friendly interface. TweakUI is actually a rewrite (bug fix) program that edits certain data entries in the Windows 95 registry. With TweakUI running on your machine you can disable the following options, which in turn will speed up your access time: window animation, reboot on start up, GUI interface effects, and last log-on settings. TweakUI also adds a few nifty extras such as smooth scroll, mouse enhancement, instant CD-ROM data load, and much more. Surprisingly enough, TweakUI is offered free of charge to any WWW user and can be found at http://www.microsoft.com or http://www.tucows.com. 
TweakUI is a definite must for any Windows 95 user looking to get the most from their home computer. No one can argue that Windows 95 is the cleanest and most efficiently set up OS around. In fact, Windows 95 is by far the messiest OS to hit the market this decade. When compared to operating systems such as MacOS, OS/2 Warp, and Windows NT, Windows 95 finishes in dead last. This is due mainly to the fact that when installing or uninstalling a program in the Windows 95 environment, the program manager scatters files all over different parts of the file system (fixed disk directory). These scattered bits of files are often called leftovers, which, if left on your drive, cause extreme slowdowns when your CPU is at work. Usually leftovers can be found in c:\windows, c:\windows\system, or c:\windows\temp. Leftover files typically carry suffixes such as .txt, .old, .log, and .tmp. Deletion of leftover files makes for faster access time and more hard disk space available. We've already seen several simple but effective ways to increase performance in the Windows 95 environment, but the most important of all is disk defragmentation. Disk fragmentation is the breaking up of the files belonging to the various programs installed on your fixed disk drive. Think of your fixed disk drive as a big completed jigsaw puzzle, which, if moved, will break apart into several sub-puzzles. The same holds true for your fixed disk. When a program is installed it takes up the amount of disk space it needs to function correctly (usually the last available part of your drive). Conversely, when a program is uninstalled it leaves a space or hole on your fixed disk where the program was before. Taking the same concept and applying it in terms of the jigsaw puzzle, we can clearly see what our fixed drive would physically look like. This is where disk defragmentation comes into play. 
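The leftover-file cleanup described above can be sketched as a small script that lists candidate files by suffix so you can review them before deleting anything by hand. The directory and the suffix list are assumptions to adjust; note that deleting files by extension alone is risky, which is why this sketch only reports:

```python
import os

# Suffixes commonly left behind by installers (assumption: tune to your system;
# .txt is deliberately excluded because real documents use it too).
LEFTOVER_SUFFIXES = (".tmp", ".old", ".log")

def find_leftovers(directory):
    """Return paths under `directory` whose names end in a leftover suffix."""
    hits = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            if name.lower().endswith(LEFTOVER_SUFFIXES):
                hits.append(os.path.join(root, name))
    return hits

if __name__ == "__main__":
    # Review this list before deleting anything.
    for path in find_leftovers(r"C:\Windows\Temp"):
        print(path)
```

Running it against a temp directory first is a safe way to confirm it only flags the suffixes you expect.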
Defragmentation moves the currently installed programs on your drive from their scattered positions into the free space, closing up the holes. Speed comes into play because, if your drive has never been defragmented, your CPU probably has to search different areas of the physical drive for certain start-up files. Disk Defragmenter comes with every version of Windows 95 and can usually be found by clicking the taskbar and highlighting the following: Programs/Accessories/System Tools/Disk Defragmenter. Overall, defragmentation increases performance by about 30 percent and makes for a neater setup. As discussed earlier, the addition of extra RAM, a faster processor, and a good video card makes up a great conventional way of boosting your performance level; unfortunately, the expense is never pretty to hear. If you currently have the minimum required setup (high-end 486, 8 megs of RAM, 1 meg of video memory), you should see some good, effective results from this tutorial. However, if your system falls short of the minimum requirements, I would definitely recommend a hardware upgrade or the purchase of a newer, more up-to-date machine. f:\12000 essays\technology & computers (295)\Spiderweb.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#----------------------------------PLEASE NOTE---------------------------------#
#This file is the author's own work and represents their interpretation of the #
#song. You may only use this file for private study, scholarship, or research.
#
#----------------#

song: Spiderwebs
by : No Doubt

Riff 1
e---------------------------
B---------------------------
G---------------------------
D---------------------------
A---6--6--5--5--3--3--1--0--
E---------------------------

Riff 2
e-----------------------------------
B-----------------------------6~~---   every other time riff 2
G-----------------------------------   is played, hit the high F
D------8-------8-------8------------   as an artificial harmonic
A-----------------------------------
E--6-6---6---6---6---6---6--6-------

Riff 3
e---------------------------------------
B---------------------------------------
G---------------------------------------
D------8-------8-----------------5--8---
A--------------------------5--8---------
E--6-6---6---6---6---6--8---------------

Intro (reggae beat)
Bb F    Gm Eb | Bb F    Gm  (riff 1)
vi viii x  vi | vi viii iii

Play Riff 2 twice
Then into VERSE
Riff 2 x 3
Riff 3

TAB FOR CHORDS
     Bb   F    Gm   Eb   Gm   Bb   F
     vi   viii x    vi   iii  i    i
e----6-------------6---3---1---1-----
B----6---10---11---8---3---3---1-----
G----7---10---12---8---3---3---2-----
D----8---10---12---8---5---3---3-----
A----8----8---10---6---5---1---3-----
E----6-----------------3-------1-----

PRECHORUS (8th notes)
Eb F    Bb Gm | Eb F    slide f up....
vi viii vi iii  vi viii

CHORUS x 2
Bb F Gm  Riff 1
i  i iii

VERSE
Riff 2 x 3
Riff 3

PRECHORUS
CHORUS x 2
CHORUS x 2 - with choppy offbeats

BRIDGE
Gm  Eb
iii vi

CHORUS x 2
CHORUS x 2 - with choppy offbeats
CHORUS x 6 (last time play riff 1 really slowly)
(throughout the last choruses, play harmonics along the low frets of the E strings, a natural flange)

OUTRO - with reggae beat of intro
Bb F Gm  Eb
vi i iii vi

Lyrics

You think that we connect
That the chemistry's correct
Your words walk right through my ears
Presuming I like what I hear
And now I'm stuck in the web
You're spinning
You've got me for your prey... 
Sorry I'm not home right now
I'm walking into spiderwebs
So leave a message
And I'll call you back
A likely story, but leave a message
And I'll call you back

You take advantage of what's mine
You're taking up my time
Don't have the courage inside me
To tell you please let me be

Communication, telephonic invasion
I'm planning my escape...

CHORUS

And it's all your fault
I screen my phone calls
No matter who calls
I gotta screen my phone calls

Now it's gone too deep
You wake me in my sleep
My dreams become nightmares
'Cause you're ringing in my ears

CHORUS

f:\12000 essays\technology & computers (295)\Starting a Business on the Internet.TXT 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
Mrs. -----, I understand that some students who have already graduated from college are having a bit of trouble getting their new businesses started. I know of a tool that will be extremely helpful and is already available to them: the Internet. Until a few years ago, when students graduated they were basically thrown out into the real world with just their education and their wits. Most of the time this wasn't good enough, because after three or four years of college the prospective entrepreneur had either forgotten too much of what they were supposed to learn, or they just didn't have the finances. Then, by the time they had saved sufficient money, they had again forgotten too much. I believe I have found the answer. On the Internet your students will be able to find literally thousands of links to help them with their future enterprises. In almost every city across North America, no matter where these students move, they are able to link up and find everything they need. They can find links like "Creative Ideas", a place where they can retrieve ideas, innovations, inventions, patents and licensing. Once they come up with their own products, they can find free expert advice on how to market them. 
There are easily accessible links to experts, analysts, consultants and business leaders to guide their way to starting their own businesses, careers and lives. These experts can push beginners in the right direction in every field of business, including every way to generate start-up revenue, from better management of personal finances to diving into the stock market. When beginners have sufficient funds to actually open their own company, they can't just expect the customers to come to them; they have to go out and attract them. This is where the Internet becomes most useful: in advertising. On the Internet, in every major consumer area in the world, there are dozens of ways to advertise. The easiest and cheapest way is to join groups such as "Entrepreneur Weekly". These groups offer weekly newsletters, sent all over the world to major and minor businesses, informing them about new companies on the market. A newsletter includes everything about your business, from what you make or sell and where to find you, to what you're worth. These groups also advertise to the general public. The major portion of the advertising is done over the Internet, but this is good because that is their target market. By now, hopefully, their business is doing well, sales are up and money is flowing in. How do they keep track of all their funds without paying for an expensive accountant? Back to the Internet. They can find lots of expert advice on where they should reinvest their money, including how many staff to hire and how qualified they should be, what technical equipment to buy and even what insurance to purchase. This is where a lot of companies get into trouble: during expansion. Too many entrepreneurs try to leap right into the highly competitive mid-size company world. On the Internet, experts give their secrets on how to let a company's natural growth force its way in. This way they are more financially stable for the rough road ahead. 
The Internet isn't always going to give you the answers you are looking for, but it will always lead you in the right direction. That is why I hope you will accept my proposal and make today's students aware of this invaluable business tool. 
f:\12000 essays\technology & computers (295)\Surfing the Internet.TXT 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
Chances are, anyone reading this paper has surfed the net at least once. Don't worry if you haven't; I will explain everything you need to know about the Internet and the World Wide Web, including how it started, its growth, and the purpose it serves in today's society. The Internet was born about 20 years ago as a U.S. Defense Department network called ARPANET. ARPANET was an experimental network designed to support military research, specifically research into how to build networks that could withstand partial outages (like bomb attacks) and still be able to function. From that point on, Internet developers responded to market pressures and began developing software for every conceivable type of computer. Internet use started out with big companies, then spread to the government, to the universities and so on. The World Wide Web, or WWW, is an information service on the Internet. The WWW is based on a technology called hypertext and was developed for physicists so they could send and retrieve information more easily. The WWW is basically a tool for exploring, or surfing, the Internet; it is an attempt to organize the Internet so you can find information more easily, moving from document to document. Why do I need to know this? Well, now that I've gotten through all the techno-babble, let's get down to it. 
If you know how to utilize the Net, in just five minutes you could trade information and comments with millions of people all over the world, or get a fast answer to any question imaginable on a scientific, computing, technical, business, investment, or any other subject. You could join over 11,000 electronic conferences, anytime, on any subject, broadcasting your views, questions, and information to millions of other participants. There has never been anything like it in the history of the world, and in this English class we've covered a lot of history. At a growth rate of about 20% per month, the Internet is only getting bigger, and if people don't start utilizing its resources they could be road kill on this Information Superhighway. Hey, I'll bet in the middle of that last sentence another computer just got on-line to the Net. There are three major features of the Internet: on-line discussion groups, universal electronic mail, and files and software. There are about 11,000 on-line discussion groups, called newsgroups, on most any topic you can imagine. If you are on the Net, you can participate in any of these discussions in any of these newsgroups. The next feature is universal electronic mail, or e-mail. E-mail is the biggest and cheapest system on the Net and is also one of its biggest attractions. Since all commercial on-line services have "gateways" for sending and receiving electronic mail messages on the Internet, you're able to send and receive messages or files to anyone else who is on-line, anywhere in the world, in seconds. The third feature I mentioned was files and software. This, in my opinion, is the most impressive one. The thousands of individual computer facilities connected to the Internet are also vast storage repositories for hundreds of thousands of software programs, information text files, video and sound clips, and other computer-based resources. 
And they're all accessible in minutes from any personal computer on-line to the Internet. So I could do all this stuff on the Internet; why should I take notice? Because of its sheer size, volume of messages, and incredible monthly growth. From the latest statistics I was able to get, there are currently 30 million people who use the Internet worldwide. To try to put that number into perspective, that's over five times the size of CompuServe, America On-line, Prodigy, and all other on-line commercial information services combined. Or, if you're not familiar with those services, it's more than the combined populations of New York City, London, and Moscow. Just a few years ago, the Internet was the small, exclusive domain of a small band of computer science students, university researchers, government defense contractors, and computer nerds, all of whom had free or cheap access through their universities or research labs. Because of that widespread free use, many people who used the Internet as students have demanded and received connections to the Internet from their employers as they got jobs in the outside world. Because of that, use of the Internet has exploded. The Internet is rapidly achieving a state of critical mass, attracting interest from huge numbers of personal computer users from non-technical backgrounds. All these new Internet users are rapidly transforming the nerd-oriented culture of the network and opening the Internet up to new and exciting possibilities. "I'm not sure threat is exactly the right word, but if you ignore the Internet, you do so at your own peril; the Internet is going to force a new way of doing business on some people," says Norman DeCarteret, senior systems analyst at Advantis (a company that links other companies to the Internet), quoted in "Internet becomes the road more traveled as E-mail users discover no usage fee," Steve Stecklow, Wall Street Journal (9/2/93). Here are some good things about the Net and why you should be using it. 
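First, though, the size and growth figures above are worth a quick sanity check with compound-growth arithmetic (the 30 million users and 20% per month come from this paper; treating 20% as a steady monthly rate is an assumption for illustration):

```python
def project_users(current_millions, monthly_growth, months):
    """Project a user count forward assuming steady compound monthly growth."""
    return current_millions * (1 + monthly_growth) ** months

# 30 million users growing at roughly 20% per month:
# 1.2 ** 12 is about 8.9, so the user base would grow almost ninefold in a year.
users_in_a_year = project_users(30, 0.20, 12)
print(round(users_in_a_year))  # about 267 (million)
```

Even if the real rate were half that, the Net would still dwarf the commercial on-line services within a couple of years, which is exactly the point being made here.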
People in all kinds of businesses and industries are sharing a wide spectrum of educational, business, and personal interests on the Net. Most probably share a high enthusiasm for the Internet and want to send and receive e-mail messages. But also, one-to-one communication by newsgroups or electronic mail is different from, and better than, conventional letter writing or voice phone conversations. You also have instant access to such a large, varied, and intelligent set of individuals on the Net, which gives you the power of being able to get good information. When you ask a question on the Internet, you stand an excellent chance of getting at least one intelligent answer from someone who has gone through the same experience. Whether it's advice on a paper you have to write, how to research a certain topic, or something of personal interest, there's always someone on the Internet willing to share. Profit: this is something I thought I would throw in for all those entrepreneurs out there. A rapidly increasing number of companies and entrepreneurs are using the Internet to market and sell their products and services. When it's done in an informative way, in good taste, and in the on-line areas designated for advertising-oriented messages, most Internet users like to see announcements of new products and services, and a growing number of companies are generating substantial sales of their products. But hey, the Internet isn't just for academic, business, and professional use. It can also be really fun! There are over 11,000 special-interest on-line conferencing areas, called newsgroups, on the Internet. Many of these groups feature large, active, and sometimes raucous discussions on the widest imaginable range of interests, hobbies, and activities: anything from antique cars, new business opportunities and personal investing to politics, gun control, sex, and The Simpsons. 
Of course, like most other things, the Internet isn't all good and glorious. You could say that the Internet is like the Wild West of the late 1800's: lawless, individualistic, brutal, and chaotic. And like any new frontier, the Internet is not without its problems. If you decide you want to connect to the Internet, there are a few things you should know. The Internet can be pretty raw. That is, a raw connection to the Internet lags behind modern personal computer interface technology by about 15 years. Without a good Windows- or Macintosh-based graphical software interface, also called a Web browser, using all the features of the Internet would require knowing UNIX, a terse computer operating system command language that's a throwback to the time-sharing computer systems of the 1970's. For Internet access, I would recommend you go with an Internet service provider. The Internet has many powerful capabilities and an almost infinite range of information and communication power, which can never be adequately covered in any one paper or book. All the information in this paper came from hard-copy sources, to show you don't have to get on the Net to find out about the Net. 

Works Cited:
Cagnon, Eric. What's on the Internet. Berkeley: Peachpit Press, 1995.
Krol, Edward. The Whole Internet: User's Guide and Catalog. Sebastopol: O'Reilly & Associates, Inc., 1992.
Internet World Magazine. On Internet 94. Westport: Mecklermedia Ltd., 1994.
Newby, Gregory B. Directory of Directories on the Internet. Westport: Mecklermedia Ltd., 1994.
Carmen, John. "The New Wave of the Internet." Wall Street Journal, 9/2/93.

Michael LaCroix
Eng 101
Dr. Sonnchein
4/10/96

f:\12000 essays\technology & computers (295)\Technology Advances.TXT 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
My report is on computer games and the advancements in technology. 
I am very interested in this field because of the rapid change in our society that pretty much requires a person to own a computer. Wherever there is work, there must be pleasure, thus resulting in computer games. In the beginning there were games like "Pong", single-pixel tennis: on each end of the screen there were two bars, and the object was to hit a square pixel back and forth in an attempt to score. These types of games were good, but as technology advanced, graphics and sound were in demand. From the ATARI came the NINTENDO (I am skipping a few minute advances in technology, like the ODYSSEY). Nintendo, which dominated the market at the time, soon had competition from SEGA. Both of these systems were 16-bit. These machines still weren't enough to satisfy consumers for long, so the industry came out with the most significant change yet: the change from cartridges to CDs. I believe the first one to use CD technology was the 3DO. The 3DO was now the item on every child's mind. It featured stunning 3D graphics as well as the quality sound you received from audio CDs. The only reason this machine did not dominate the market was its price tag, a whopping $300. A lot to pay for your child's (or husband's) entertainment. The only problem I find with systems like the SEGA, NINTENDO, and 3DO is the lack of variety. When PCs became sensible in the home, there was really no comparison except in the price: $2,000 for a PC or $300 for a 3DO, the difference is quite clear. I hope that this essay has been informative. 
f:\12000 essays\technology & computers (295)\Technology effects modern America.TXT 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
How Technology Affects Modern America 
The microeconomic picture of the U.S. has changed immensely since 1973, and the trends are proving to be consistently downward for the nation's high school graduates and high school drop-outs. 
"Of all the reasons given for the wage squeeze - international competition, technology, deregulation, the decline of unions and defense cuts - technology is probably the most critical. It has favored the educated and the skilled," says M. B. Zuckerman, editor-in-chief of U.S. News & World Report (7/31/95). Since 1973, wages adjusted for inflation have declined by about a quarter for high school dropouts, by a sixth for high school graduates, and by about 7% for those with some college education. Only the wages of college graduates are up. Of the fastest growing technical jobs, software engineering tops the list. Carnegie Mellon University reports, "recruitment of its software engineering students is up this year by over 20%." All engineering jobs are paying well, proving that highly skilled labor is what employers want! "There is clear evidence that the supply of workers in the [unskilled labor] categories already exceeds the demand for their services," says L. Mishel, Research Director of the Welfare Reform Network. In view of these facts, I wonder whether these trends are good or bad for society. "The danger of the information age is that while in the short run it may be cheaper to replace workers with technology, in the long run it is potentially self-destructive because there will not be enough purchasing power to grow the economy," says M. B. Zuckerman. My feeling is that the trend from unskilled labor to highly technical, skilled labor is a good one! But political action must be taken to ensure that this societal evolution is beneficial to all of us. "Back in 1970, a high school diploma could still be a ticket to the middle income bracket, a nice car in the driveway and a house in the suburbs. Today all it gets is a clunker parked on the street, and a dingy apartment in a low rent building," says Time Magazine (Jan 30, 1995 issue). 
However, in 1970, our government provided our children with a free education, allowing the vast majority of our population to earn a high school diploma. This meant that anyone, regardless of family income, could be educated to a level that would allow them a comfortable place in the middle class. Even restrictions upon child labor hours kept children in school, since they were not allowed to work full time while under the age of 18. This government policy was conducive to our economic markets, and allowed our country to prosper from 1950 through 1970. Now, our own prosperity has moved us into a highly technical world that requires highly skilled labor. The natural answer to this problem is that the U.S. Government's education policy must keep pace with the demands of the highly technical job market. If a middle class income in 1970 required a high school diploma, and a middle class income in 1990 requires a college diploma, then it should be as easy for the children of the 90's to get a college diploma as it was for the children of the 70's to get a high school diploma. This brings me to the issue of our country's political process in a technologically advanced world. 
Voting & Poisoned Political Process in The U.S. 
The advance of mass communication is natural in a technologically advanced society. In our country's short history, we have seen the development of the printing press, the radio, the television, and now the Internet, all of them able to reach millions of people. Equally natural is the poisoning and corruption of these media to benefit a few. From the 1950's until today, television has been the preferred medium. Because it captures the minds of most Americans, it is the preferred method of persuasion for political figures, multinational corporate advertising, and the upper 2% of the elite, who have an interest in controlling public opinion. 
Newspapers and radio experienced this same history, but are now somewhat obsolete in the science of changing public opinion. Though I do not expect television to become completely obsolete within the next 20 years, I do see the Internet being used by the same political figures, multinational corporations, and upper 2% elite, for the same purposes. At this time, early in the Internet's history, it is largely unregulated, and can be accessed and changed by any person with a computer and a modem; no license required, and no need for millions of dollars of equipment. But in reviewing our history, we find that newspaper, radio and television were once unregulated too. It is easy to see why government has such an interest in regulating the Internet these days. Though public opinion supports regulating sexual material on the Internet, that is just the first step toward total regulation, as experienced by every other popular mass medium in our history. This is why it is imperative to educate people about the Internet, and make it known that any regulation of it is destructive to us, not constructive! I have been a daily user of the Internet for 5 years (and a daily user of BBS communications for 9 years), which makes me a senior among us. I have seen the moves to regulate this type of communication, and have always openly opposed them. My feelings about technology, the Internet, and the political process are simple. In light of the history of mass communication, there is nothing we can do to protect any medium from the "sound bite" or any other form of commercial poisoning. But our country's public opinion doesn't have to fall into a nose-dive of lies and corruption because of it! My first experience with a course on Critical Thinking came when I entered college. Of all the good things I have learned in college, I found this course to be the most valuable to my basic education. 
I was angry that I hadn't had access to the power of critical thought over my twelve years of basic education. Simple forms of critical thinking can be taught as early as kindergarten. It isn't hard to teach a young person to understand the patterns of persuasion, and to defend themselves against them. Television doesn't have to be a weapon against us, used to sway our opinions to conform to the wishes of people who care about their own prosperity, not ours. With the power of a critical thinking education, we can stop being motivated by the sound bite and, instead, laugh at it as a cheap attempt to persuade us. I feel that the advance of technology is a good trend for our society; however, it must come in conjunction with advances in education, so that society is able to master and understand technology. We can be the masters of technology, and not let it be the master of us. 

Bibliography
"Where have the good jobs gone?", by Mortimer B. Zuckerman. U.S. News & World Report, volume 119, pg. 68 (July 31, 1995).
"Wealth: Static Wages, Except for the Rich", by John Rothchild. Time Magazine, volume 145, pg. 60 (January 30, 1995).
"Welfare Reform", by Lawrence Mishel. http://epn.org/epi/epwelf.html (Feb 22, 1994).
"20 Hot Job Tracks", by K.T. Beddingfield, R. M. Bennefield, J. Chetwynd, T. M. Ito, K. Pollack & A. R. Wright. U.S. News & World Report, volume 119, pg. 98.

f:\12000 essays\technology & computers (295)\Technology in our Society.TXT 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
No doubt, technology is increasingly important in the modern world. It is amazing how fast technology has developed; nearly every major advance was invented in the last century. These inventions are always planned for a positive result; however, the negative effects often do not become apparent until after the event. These effects will be dealt with in the following paragraphs, with related materials. 
The text "Whose Life is it Anyway?", by Brian Clark, clearly illustrates that with the development of medical technology, people can now have a better quality of life. Moreover, many lives which normally would not survive can now be artificially prolonged by advanced medical treatment. The central character, Ken Harrison, who becomes a quadriplegic after a car accident, faces this situation. Nevertheless, it is cruel to ask him to face this life if he does not desire to. He can no longer sculpt, run, move, kiss or have any form of sexual fulfillment. Obviously, his normal life has drifted away. The tendency to sustain people's lives just because the technology is available is intolerable under certain circumstances. It is the individual patient who must make the decision about whether to stay alive. "What is the point of prolonging a person's biological life if it is obtained at the cost of a serious assault on that person's liberty?" There is probably no simple answer to this question. Any patient's decision should be respected, rather than overridden simply because the technology is available. Medical technology thus has the potential for both good and bad results; nonetheless, it is very important in today's society. "Insurance in the Genes" is a valuable piece of material which explores another area of the technological field. Nowadays, genetic engineering plays an essential role. Genetic testing can predict a person's biological use-by date, forecasting everything from heart attacks to breast cancer. People can therefore have a basic picture of their health and guard against what is going to happen, if technology allows them to know it beforehand. "Up until now, only 50 genetic tests have been developed to detect diseases. But within a decade, there will be tests for 5000 diseases." It is a remarkable increase. In the near future, hopefully, genetic testing will be employed to reveal potential health risks. 
It is a positive effect of technology in the modern world. Another useful source on the effects of technology in our world is the documentary. On 23 April 1996, SBS broadcast a film entitled "Weapon: A Battle for Humanity". It portrayed landmines and laser weapons as devils. Evidently, mines do not just shatter individual lives; they also shatter whole communities. In World War II, mines were used as defensive weapons. However, they do not only kill soldiers, but also farmers farming, children playing and women collecting food. People in the past, and even now, have protested against their existence. Laser weapons have been abused in the military field. Militaries plan to deploy these weapons in war. It has been recognized that, under certain conditions, laser weapons can cause the loss of sight, and no medical science today can actually give sight back. Weapons should only be objects of defense. However, because of the advance of technology, they have become more and more powerful. Scientists clearly know that misusing weapons will result in deaths, but they are still working towards more powerful weapons which can result in even more death. Why is this? Weapons lead to homelessness, disasters, sacrifices and death. This study of the development of landmines and laser weapons shows that technology can be used for destructive and immoral reasons. It is shocking to know that the USA, a peaceful nation and a member of the United Nations, spent more than two-thirds of its research and development finance on military projects in the 1980s. My personal experience has given me a lot of understanding of this issue. In today's society, communication and transport are significant features. Over the last decade, their technological development has been rapid. People who want to go to other countries can travel by airplane, and people who want to communicate with friends overseas can use the telephone, fax or Internet. 
Not only in Australia, but also in other developing countries, the Internet has become more and more common. With the Internet, I can now travel all over the world without stepping out of my door. Most importantly, a large amount of money is saved, so having the Internet is important to me. The Internet has taken communication a step further: all information is totally accessible to anyone who owns this form of technology. It opens up a new international community, which is positive and should lead to a peaceful modern world. So in this world today, technology is perhaps the most important driving force of our society: creating dilemmas concerning life and death, changing nature with genetic engineering, developing immoral weapons, and offering the instant advantages of the Internet. 
f:\12000 essays\technology & computers (295)\Telecommunication.TXT 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
Telecommunication 
1. Introduction 
Computer and telephone networks have a gigantic impact on today's society. From letting you call John in Calgary to letting you make a withdrawal at your friendly ATM, they control the flow of information. But today's complicated and expensive networks did not start out big and complicated, but rather as a wire and two terminals back in 1844. From these simple networks to the communication giants of today, we will look at the evolution of the network and the basis on which it functions. 
2. The Beginnings 
2.1. Dot Dot Dot Dash Dash Dash Dot Dot Dot 
A network is defined as a system of lines or structures that cross. In telecommunications this means a connection of peripherals so that they can exchange information. The first such exchange of information took place on May 24, 1844, when Samuel Morse sent the famous message "What hath God wrought" from the US Capitol in Washington D.C. across a 37-mile wire to Baltimore using the telegraph. The telegraph is basically an electromagnet connected to a battery via a switch. 
When the switch is down, the current flows from the battery through the key, down the wire, and into the sounder at the other end of the line. By itself the telegraph could express only two states, on or off. This limitation was overcome by using the duration of the connection to distinguish a dot from a dash, short and long respectively. From these combinations of dots and dashes the Morse code was formed. The code included all the letters of the English alphabet, all the numbers and several punctuation marks. A variation on the telegraph was a receiving module that Morse had invented. The module consisted of a mechanically operated pencil and a roll of paper. When a message was received, the pencil would draw the corresponding dashes and dots on the paper to be deciphered later. Many inventors, including Alexander Bell and Thomas Edison, sought to revolutionize the telegraph. Edison devised a deciphering machine which, when receiving Morse code, would print the corresponding letters on a roll of paper, eliminating the need to decode by hand. 
2.2. Mr. Watson, Come Here! 
The first successful telephone was invented by Alexander Graham Bell. He and Elisha Gray raced against time to invent and patent the telephone. They both filed for patents on their devices on the same day - February 14, 1876 - but Bell arrived a few hours ahead of Gray, thus getting the patent on the telephone. The patent issued to Bell was number 174,465, and is considered the most valuable patent ever issued. Bell quickly tried to sell his invention to Western Union, but they declined and hired Elisha Gray and Thomas Edison to invent a better telephone. A telephone battle began between Western Union and Bell. Soon after, Bell filed suit against Western Union and won, since he possessed the basic rights and patents to the telephone. 
As a settlement, Western Union handed over its whole telephone network to Bell, giving him a monopoly in the telephone market. During his experiments to create a functional telephone, Bell pursued two separate designs for the telephone transmitter. The first used a membrane attached to a metal rod. The metal rod was submerged in a cup of mild acid. As the user spoke into the transmitter, the membrane vibrated, which in turn moved the rod up and down in the acid. This motion of the rod in the acid caused variations in the electrical resistance between the rod and the cup of acid. One of the greatest drawbacks of this model was that the cup of acid had to be constantly refilled. The second of Bell's prototypes was the induction telephone transmitter. It used the principle of magnetic induction to change sound into electricity. The membrane was attached to a metal rod, which was surrounded by a coil of wire. The movement of the rod in the coil produced a weak electric current. An advantage was that, in theory, it could be used both as a transmitter and as a receiver. But since the current produced was so weak, it was unsuccessful as a transmitter. Most modern telephones still use a variation of Bell's design. The first practical transmitter was invented by Thomas Edison while he was working for Western Union. During his experiments Edison noticed that certain carbon compounds change their electrical resistance when subjected to varying pressure. So he sandwiched a carbon button between a metal membrane and a metal support. The motion of the membrane changed the pressure on the carbon button, varying the flow of electricity through the microphone. When the Bell vs. Western Union lawsuit was settled, the rights to this transmitter were also taken over by Bell. 2.3. Please Wait, I'll Connect You. The first network of telephones consisted of switchboards. When a customer wanted to place a call, he would turn a crank on his telephone terminal at home. 
This would produce a current through the line, and a light at the switchboard would light up. The caller would tell the operator where he wanted to call, and she would connect him by inserting a plug into the jack corresponding to the desired phone. In the early years Bell found that he could use the ground as the return part of the circuit, but this left the telephone very susceptible to interference from anything electrical. So in the mid-1880s Bell realized that he would have to change the telephone networks from one wire to two. In 1889 Almon Brown Strowger invented the telephone dial, which eliminated the need for telephone operators. 2.4. The Free Press Reported That President Carter....... French inventor Emile Baudot created the first efficient printing telegraph. The printing telegraph was the first to use a typewriter-like keyboard, and it allowed eight users to share the same line. More importantly, his machines did not use Morse code. Baudot's five-level code sent five pulses for each character transmitted. The machines did the encoding and decoding, eliminating the need for operators. After some improvements by Donald Murray, the rights to the machine were sold to Western Union and Western Electric. The machine was named the teletypewriter and was also known by its nickname, TTY. A service called telex was offered by Western Union. It allowed subscribers to exchange typed messages with one another. 3. From The Carterfone to the 14,400 3.1. I'll Patch Her Up On The Carterfone, Captain. The first practical computers used punched cards as a method of storing data. These punched cards held 80 characters each. They dated back to the mechanical tabulating machine invented by Herman Hollerith for the 1890 census. But this type of computer was hard and expensive to operate. The machines were very slow in computing speed, and the punch cards could easily be lost or destroyed. One of the first VDTs (Video Display Terminals) was the Lear-Siegler ADM-3A. 
It could display 24 lines of 80 characters each (a remarkable feat of technology at the time). One of the regulations that AT&T maintained was that no other company's equipment could be physically connected to any of its lines or equipment. This meant that unless AT&T made a peripheral, it was not legal to connect it to the telephone jack. In 1966 a small Texas company called Carterfone invented a simple device that could get around these regulations. The Carterfone allowed a company's radio to be connected to the telephone system. The top portion of the Carterfone consisted of molded plastic. When a radio user needed to use the telephone, the radio operator at the base station placed the receiver in the Carterfone and dialed the number. This allowed the user to call through the radio. AT&T challenged the Carterfone's right to operate on the phone lines and lost the battle in court. In 1975 the FCC passed the Part 68 rules. These were specifications that, if met, would allow third-party companies to sell and hook up their equipment to the telephone network. This turned the telephone industry upside down and challenged AT&T's monopoly in the telephone business. 3.2. So Gentlemen, 'A' Will Be 65 With more and more electronic communication and the invention of VDTs, the shortcomings of the Baudot code were realized. So in 1966, several telecommunications companies devised a replacement for the Baudot code. The result was the American Standard Code for Information Interchange, or ASCII. ASCII uses 7 bits, allowing it to represent 128 characters without a shift code. The code defined 96 printable characters (A through Z in upper- and lowercase, the numbers 0 through 9, and various punctuation marks) and several control characters such as carriage return, line feed, and backspace. ASCII also included an error-checking mechanism: an extra bit, called the parity bit, is added to each character. 
With even parity, the parity bit is set to one when the seven data bits contain an odd number of ones, and to zero when they contain an even number, so that the total count of ones in each character is always even; a character arriving with an odd count has been corrupted in transit. IBM invented its own code, which used 8 bits, giving 256 possible characters. The code was called EBCDIC, for Extended Binary Coded Decimal Interchange Code, and it was not sequential. Extended ASCII was designed so that PCs could attain compatibility with the IBM machines. The upper 128 characters of the extended ASCII code include pictures such as lines, hearts, and scientific notation. In 1969 guidelines were set for the construction of serial ports. The RS-232C standard was established to define a way to move data over a communications link. RS-232C is commonly used to transmit ASCII but can also carry Baudot and EBCDIC data. The connector is normally a 25-pin D-shell with a male plug on the DTE (Data Terminal Equipment) and a female plug on the DCE (Data Communications Equipment). 3.3. Hello Joshua, Would You Like To Play A Game... In the 1950s a need arose to connect computer terminals across ordinary telephone lines. This need was fulfilled by AT&T's Bell 103 modem. A modem (modulator/demodulator) is used to convert the on-off digital pulses of computer data into on-off analog tones that can be transmitted over a normal telephone circuit. The Bell 103 operated at a speed of 300 bits per second, which at that time was more than ample for the slow printing terminals of the day. The Bell 103 used two pairs of tones to represent the on-off states of the RS-232C data line: one pair for the modem that is calling and the other pair for the modem answering the call. The calling modem sends data by switching between 1070 and 1270 hertz, and the answering modem by switching between 2025 and 2225 hertz. The principle on which the Bell 103 operated is still in use today. During the sixties and seventies the concept of mainframe networks arose. 
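The even-parity rule from section 3.2 can be sketched in a few lines of Python (a sketch of the rule itself, not of any particular serial chip's implementation):

```python
def even_parity_bit(char):
    """Return the parity bit that makes the total count of ones even."""
    ones = bin(ord(char)).count("1")
    return ones % 2  # 1 if the seven data bits hold an odd number of ones

# 'A' is 1000001 in ASCII: two ones, so the parity bit is 0.
print(even_parity_bit("A"))  # -> 0
# 'C' is 1000011: three ones, so the parity bit is 1.
print(even_parity_bit("C"))  # -> 1
```

A receiver simply recounts the ones: an odd total under even parity flags a transmission error, though a two-bit error slips through undetected.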
A mainframe network consisted of a very powerful computer to which thousands of terminals were connected. The mainframe worked on a timesharing basis: many users at terminals could each use a limited amount of the host computer's resources, letting many parties access the host at the same time. This type of network, however, was very expensive, and since under timesharing each user got only a small share of the host's total computing power (CPU), the terminals felt slow and sluggish. In the late seventies the personal computer was introduced to the public. A personal computer consisted of a monitor, a keyboard, a CPU (Central Processing Unit), and various other connectors and memory chips. The advantages of PCs were that they did not have to share their CPU and that their operating costs were much lower than those of their predecessors. With a software package, the computers could emulate terminals and be connected to the mainframe network. Bell Laboratories came up with the 212A unit, which operated at 1200 bits per second. This unit, however, was very susceptible to noise interference. 3.4. Hey Bell! I Can Hang Myself Up! After the breakup of the AT&T empire that had controlled the modem industry, many other companies started to create new modem designs. Hayes Microcomputer Products took the lead in the PC modem business. Hayes pioneered the use of microprocessor chips inside the modem itself. The Hayes Smartmodem, introduced in 1981, used a Zilog Z-8 CPU chip to control the modem circuitry and to provide automatic dialing and answering. The Hayes unit could take the phone off the hook, wait for the dial tone, and dial a telephone number all by itself. The Hayes Smartmodems sometimes had more powerful CPUs than the computers they were connected to. The next advancement was the invention of the 2400 bits per second modem. 
The specifications came from the CCITT, an international standards-setting organization whose work draws on hundreds of companies worldwide. The new standard was designated V.22bis and is still in use today. Other CCITT standards that followed were V.32 (9600 bps), V.32bis (14,400 bps), V.42 (error control), and V.42bis (data compression). Virtually all modems today conform to these standards. The next big computer invention was the fax modem. It uses on-off data transmission just as a modem does, but for the purpose of creating a black-and-white image. Each on-off signal represents a black or white area of the image. The image is sent as a set of zeros and ones and is then reconstructed on the receiving end. 4. LANs 4.1. I Donnwanna File-Share! A network operating system (NOS) is actually a group of programs that give computers and peripherals the ability to accept requests for service across a network, and give other computers the ability to correctly use those services. Servers share their hard disks, attached peripherals such as printers and optical drives, and communication devices. They inspect requests for proper authorization, check for conflicts and errors, and then perform the requested service. There is a multitude of different types of servers. File servers are equipped with large hard drives that are used to share files and information, as well as whole applications. The file-server software allows shared access to specific segments of the data files under controlled conditions. Print servers accept print jobs sent by anyone on the network. These servers are equipped with spooling software (which saves data to disk until the printer is ready to accept it), vital in situations where many requests can pour in at the same time. Network operating systems package requests from the keyboard and from applications into a succession of data envelopes for transmission across the network. 
For example, Novell's NetWare will package a directory request in an IPX (Internetwork Packet Exchange) packet, and the LAN adapter will then package the IPX request into an Ethernet frame. At each step, addressing information and error-control data are added to the packet. 4.2. Eight Go In One Comes Out The Network Interface Card, or LAN adapter, is the interface between the computer and the network cabling. Within the computer, it is responsible for moving data between the RAM (Random Access Memory) and the card itself. Externally, it controls the flow of data in and out of the network cabling system. Since computers are typically faster than the network, the LAN adapter must also act as a buffer between the two. It is also responsible for converting the data from a wide parallel stream, coming in eight bits at a time, to a narrow serial stream moving one bit at a time in and out of the network port. To handle these tasks, LAN adapters are equipped with a microprocessor and 8-64K of RAM. Some cards include sockets for ROM chips called boot ROMs. These chips allow computers without hard drives to boot their operating systems from the file server. 4.3. Take Your Turn! Ethernet and Token Ring network adapters use similar systems of electrical signaling over the network cable, signals not unlike the Baudot and Morse codes. A technique called Manchester encoding uses voltage pulses ranging from -15 V to +15 V to transmit the zeros and ones. The network cable has one drawback: it can carry signals from only one network card at a time. So each LAN architecture needs a media-access control (MAC) scheme to make the network cards take turns transmitting into the cable. Ethernet cards listen to the traffic on the cable and transmit only when there is a break in the traffic and the channel is quiet. This technique is called Carrier Sense Multiple Access with Collision Detection (CSMA/CD). 
With collision detection, if two cards start transmitting at the same time, they detect the collision, stop, and resume some time later. Token Ring networks use a much more complex process called token passing. Token Ring cards wait for permission to transmit into the cable, which forms an electrical loop. The cards use their serial numbers to elect the master interface card. This card starts a message called a token. When a card with information to send receives the token, it sends its data across the network. After the addressed interface card receives the information and returns it to the originating card, the token is given back to the master to be passed on to the next card. The ARCnet network uses a system very similar to that of Token Ring. Instead of using a token, the master card keeps a table of all active cards and polls each one in turn, giving it permission to transmit. 4.4. Tied In A Knot Various types of cabling are used to connect the LAN adapters to the servers. Unshielded twisted-pair wire offers rather slow speeds, is very inexpensive, is small in diameter, and can span only very short distances. These cables use the RJ-45 connector. Coaxial cable offers fast speeds, is rather expensive, has a medium-sized diameter, and can span medium distances. Coaxial cable uses BNC connectors. Shielded twisted-pair cable offers fast speeds, is more expensive than coaxial cable, has a large diameter, and can span only short distances. These cables use the IBM data connector. Fiber optic cable is the fastest possible medium for data transfer, costs astronomical amounts of money, has a tiny diameter, and can span very long distances. This cable uses the ST fiber optic connector. Wiring hubs are used as central points for the cables from the network interface cards. 4.5. Loves Me, Loves Me Not, Server Based, Peer To Peer... There are two general types of LANs. 
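The Ethernet listen-and-retry discipline from section 4.3 can be sketched in Python. The binary exponential backoff used below is the classic Ethernet rule for how long a card waits after a collision; the essay itself only says "some time later", so the exact formula is an assumption:

```python
import random

def backoff_slots(collisions):
    """Classic Ethernet backoff: after n collisions, a card waits a
    random number of slot times drawn from 0 .. 2**n - 1 (the
    exponent is capped at 10, so the range never exceeds 0..1023)."""
    return random.randint(0, 2 ** min(collisions, 10) - 1)

# A card that has collided three times waits between 0 and 7 slots.
print(backoff_slots(3))
```

Because both colliding cards draw their delays at random, they are unlikely to pick the same slot again, which is what eventually breaks the tie.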
Server-based networks rely on one major server to store data, offer access to peripherals, handle printing, and accomplish all the work associated with network management. Server-based networks have a high start-up cost but offer high security as well as ease of operation. These networks become more economical as more computers are added. In peer-to-peer networks, the network responsibilities are divided among many computers: some act as file servers, others as print servers, some as CD-ROM servers, tape drive servers, and so on. The start-up cost of these networks is much lower, but as more computers are added to the network, some of the servers may not be able to handle the extra activity. 5. Links Between LANs 5.1. She Just Won't Send Sysop! Most networks have very short information-transfer ranges, but in an ever-shrinking world the need for links between LANs has never been higher. This section will explain the components and information needed to link LANs. When an electrical signal travels over a long length of cable, it weakens, and it is susceptible to electromagnetic interference. To combat the length problem, a component has been devised: the repeater, a little box inserted into the cable whose primary function is to amplify the weakening pulse and send it on its way. Bridges are used to analyze the station address of each Ethernet packet and determine the destination of the message. Routers strip the outer Ethernet framing from a data packet in order to get at the data. This data is sent to other routers in other parts of the world and then repackaged by those routers. The removal of the excess packet framing by the routers decreases the time required to transfer the data. If networks use the same addressing protocol, bridges can be used to link them; however, if they use different addressing protocols, only routers may be used. MANs (Metropolitan Area Networks) are also in use and under development today. 
These use routers, connected preferably via fiber optic cable, to create one large network. 5.2. Pluto Calling Earth! Networks spanning more than about 1000 m typically rely on digital telephone lines for data transfer. These networks are called Circuit Switched Digital Networks. They utilize a switching matrix at the telephone company's central office that connects local calls to long-distance services. Telephone companies now offer dial-up circuits with signaling rates of 56, 64, and 384 kilobits per second, as well as 1.544 megabits per second. Another type of LAN-to-LAN connection is the packet-switching network. These are services that a network router calls up on a digital line. They consist of a group of packet switches connected via interswitch trunks (usually fiber optic) that relay addressed packets of information between them. Once a packet reaches the destination packet switch, it is sent via another digital connection to the receiving router. 
f:\12000 essays\technology & computers (295)\Telecommunications.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
Telecommunications The transmission of words, sounds, images, or data in the form of electronic or electromagnetic signals or impulses. Transmission media include the telephone (using wire or optical cable), radio, television, microwave, and satellite. Data communication, the fastest growing field of telecommunication, is the process of transmitting data in digital form by wire or radio. Digital data can be generated directly in a 1/0 binary code by a computer or can be produced from a voice or visual signal by a process called encoding. A data communications network is created by interconnecting a large number of information sources so that data can flow freely among them. The data may consist of a specific item of information, a group of such items, or computer instructions. 
Examples include a news item, a bank transaction, a mailing address, a letter, a book, a mailing list, a bank statement, or a computer program. The devices used can be computers, terminals (devices that transmit and receive information), and peripheral equipment such as printers (see Computer; Office Systems). The transmission line used can be a normal telephone line or a specially purchased one called a leased, or private, line (see Telephone). It can also take the form of a microwave or communications-satellite linkage, or some combination of these various systems. Hardware and Software Each telecommunications device uses hardware, which connects the device to the transmission line, and software, which makes it possible for the device to transmit information through the line. Hardware Hardware usually consists of a transmitter and a cable interface or, if the telephone is used as the transmission line, a modulator/demodulator, or modem. A transmitter prepares information for transmission by converting it from the form that the device uses (such as a clustered or parallel arrangement of electronic bits) to the form that the transmission line uses (usually a serial arrangement of electronic bits). Most transmitters are an integral element of the sending device. A cable interface, as the name indicates, connects a device to a cable. It converts the transmitted signals from the form required by the device to the form required by the cable. Most cable interfaces are also an integral element of the sending device. A modem converts digital signals between the demodulated form that the device itself requires and the modulated form required by the telephone line. Modems transmit data through a telephone line at various speeds, which are measured in bits per second (bps) or as signals per second (baud). Modems can be either integral or external units. An external unit must be connected by cable to the sending device. 
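The speeds just quoted make transfer times easy to estimate. A rough sketch follows, assuming the common asynchronous framing of one start bit, seven data bits, a parity bit, and one stop bit, so that each character costs ten bit times on the line (the framing is an assumption; the entry above does not specify one):

```python
# With a start bit, 7 data bits, a parity bit, and a stop bit,
# each character occupies 10 bit times on the line.
BITS_PER_CHAR = 10

def seconds_to_send(chars, bps):
    """Approximate time to transmit a document of `chars` characters."""
    return chars * BITS_PER_CHAR / bps

# A 3,000-character letter at an early modem's 300 bps:
print(seconds_to_send(3000, 300))  # -> 100.0
```

The same letter at 28,800 bps takes about a second, which is why modem speed was the industry's main battleground.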
Most modems can dial a telephone number or answer a telephone automatically. Software Among the different kinds of software are file-transfer, host, and network programs. File-transfer software is used to transmit a data file from one device to another. Host software identifies a host computer as such and controls the flow of data among devices connected to it. Network software allows devices in a computer network to transmit information to one another. Applications Three major categories of telecommunication applications can be discussed here: host-terminal, file-transfer, and computer-network communications. Host-Terminal In these types of communications, one computer-the host computer-is connected to one or more terminals. Each terminal transmits data to or receives data from the host computer. For example, many airlines have terminals that are located at the desks of ticket agents and connected to a central, host computer. These terminals obtain flight information from the host computer, which may be located hundreds of kilometers away from the agent's site. The first terminals to be designed could transmit data only to or from such host computers. Many terminals, however, can now perform other functions such as editing and formatting data on the terminal screen or even running some computer programs. Manufacturers label terminals as "dumb," "smart," or "intelligent" according to their varying capabilities. These terms are not strictly defined, however, and the same terminal might be labeled as dumb, smart, or intelligent depending upon who is doing the labeling and for what purposes. File-Transfer In file-transfer communications, two devices are connected: either two computers, two terminals, or a computer and a terminal. One device then transmits an entire data or program file to the other device. 
For example, a person who works at home might connect a home computer to an office computer and then transmit a document stored on a diskette to the office computer. An outgrowth of file transfer is electronic mail. For example, an employee might write a document such as a letter, memorandum, or report on a computer and then send the document to another employee's computer. Computer-Network In computer-network communications, a group of devices is interconnected so that the devices can communicate and share resources. For example, the branch-office computers of a company might be interconnected so that they can route information to one another quickly. A company's computers might also be interconnected so that they can all share the same hard disk. The three kinds of computer networks are local area networks (LAN), private branch exchange (PBX) networks, and wide-area networks (WAN). LANs interconnect devices with a group of cables; the devices communicate at a high speed and must be in close proximity. A PBX network interconnects devices with a telephone switching system; in this kind of network, the devices must again be in close proximity. In wide-area networks, on the other hand, the devices can be at great distances from one another; such networks usually interconnect devices by means of telephone. Telecommunication Services Public telecommunication services are a relatively recent development in telecommunications. The four kinds of services are network, information-retrieval, electronic-mail, and bulletin-board services. Network A public network service leases time on a WAN, thereby providing terminals in other cities with access to a host computer. Examples of such services include Telenet, Tymnet, Uninet, and Datapac. These services sell the computing power of the host computer to users who cannot or do not wish to invest in the purchase of such equipment. 
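The file-transfer pattern described above, one device sending an entire file to another, can be sketched with a pair of connected sockets standing in for the two machines (a toy model: real transfers add error checking, flow control, and a protocol for file names and sizes):

```python
import socket

# socket.socketpair gives two already-connected endpoints, standing
# in for the home computer and the office computer.
sender, receiver = socket.socketpair()

document = b"Quarterly report: sales were up."
sender.sendall(document)   # transmit the whole "file"
sender.close()             # closing signals end-of-file

received = b""
while True:
    chunk = receiver.recv(1024)
    if not chunk:          # empty read: the sender has closed
        break
    received += chunk
receiver.close()

assert received == document
```

Electronic mail is the same movement of bytes with an extra hop: the document goes to a mail host first, and the recipient's machine fetches it later.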
Information-Retrieval An information-retrieval service leases time on a host computer to customers whose terminals are used to retrieve data from the host. An example of this is CompuServe, whose host computer is accessed by means of the public telephone system. This and other such services provide general-purpose information on news, weather, sports, finances, and shopping. Other information-retrieval services may be more specialized. For example, Dow Jones News Retrieval Services provide general-purpose information on financial news and quotations, corporate-earning estimates, company disclosures, weekly economic survey updates, and Wall Street Journal highlights. Newsnet provides information from about 200 newsletters in 30 different industries; Dialog Information Services, BRS Bibliographic Retrieval Services, and Orbit Information Retrieval Services provide library information; and Westlaw provides legal information to its users. See Database. Electronic-Mail By means of electronic mail, terminals transmit documents such as letters, reports, and telexes to other computers or terminals. To gain access to these services, most terminals use a public network. Source Mail (available through The Source) and EMAIL (available through CompuServe) enable terminals to transmit documents to a host computer. The documents can then be retrieved by other terminals. MCI Mail Service and the U.S. Postal ECOM Service (also available through The Source) let terminals transmit documents to a computer in another city. The service then prints the documents and delivers them as hard copy. ITT Timetran, RCA Global Communications, and Western Union Easylink let terminals send telexes to other cities. Bulletin-Board By means of a bulletin board, terminals are able to facilitate exchanges and other transactions. Many bulletin boards do not charge a fee for their services. 
Users of these services simply exchange information on hobbies, buy and sell goods and services, and exchange computer programs. Ongoing Developments Certain telecommunication methods have become standard in the telecommunications industry as a whole, because if two devices use different standards they are unable to communicate properly. Standards are developed in two ways: (1) a method is so widely used that it comes to dominate; (2) a method is published by a standards-setting organization. The most important organization in this respect is the International Telecommunication Union, a specialized agency of the United Nations, and one of its operational entities, the International Telegraph and Telephone Consultative Committee (CCITT). Other organizations in the area of standards are the American National Standards Institute, the Institute of Electrical and Electronics Engineers, and the Electronic Industries Association. One of the goals of these organizations is the full realization of the integrated services digital network (ISDN), which is projected to be capable of transmitting both voice and nonvoice data around the world in digital form, through a variety of media and at very high speeds. Other developments in the industry are aimed at increasing the speed at which data can be transmitted. Improvements are being made continually in modems and in the communications networks. Some public data networks support transmission at 56,000 bits per second (bps), and modems for home use (see Microcomputer) are capable of as much as 28,800 bps. Introduction When a handful of American scientists installed the first node of a new computer network in the late 60's, they could not have known what phenomenon they had launched. They had been set the challenging task of developing and realising a completely new communication system, one that would be either fully damage-resistant or at least remain functional even if an essential part of it lay in ruins, in case a third world war started. 
The scientists did what they had been asked to do. By 1972 there were already thirty-seven nodes installed, and ARPANET (Advanced Research Projects Agency NET), as the system of computer nodes was named, was working (Sterling 1993). Since those "ancient times", during which the network was used only for national academic and military purposes (Sterling 1993), much of the character of the network has changed. Its users today work in both commercial and non-commercial branches, not just in academic and governmental institutions. Nor is the network only national: it has expanded to many countries around the world. The network has become international, and in that way it got its name; people call it the Internet. The popularity of this new phenomenon is rising rapidly, almost beyond belief. In January 1994 there were an estimated 2 million computers linked to the Internet. However, this is nothing compared to the number from last year's statistics: at the end of 1995, 10 million computers with 40-50 million users were estimated to be connected to the network-of-networks. If it goes on like this, most personal computers will be wired to the network by the end of this century (Internet Society 1996). The Internet is phenomenal in many ways. One of them is that it connects people from different nations and cultures. The network enables them to communicate, exchange opinions and gain information from one another. As each country has its own national language, in order to communicate and make themselves understood in this multilingual environment, the huge number of Internet users need to share a knowledge of one particular language, a language that would function as a lingua franca. On the Internet, for various reasons, the lingua franca is English. 
Because of the large number of countries into which the Internet has spread, and the considerable variety of languages they bring with them, English, given its status as a global language, has become a necessary communication medium on the Internet. What is more, the position of English as the language of the network is strengthened by the explosive growth of the computer web, as great numbers of new users connect to it every day. Internet, in computer science, an open interconnection of networks that enables connected computers to communicate directly. There is a global, public Internet and many smaller-scale, controlled-access internets, known as enterprise internets. In early 1995 more than 50,000 networks and 5 million computers were connected via the Internet, with a computer growth rate of about 9 percent per month. Services The public Internet supports thousands of operational and experimental services. Electronic mail (e-mail) allows a message to be sent from one computer to one or more other computers. Internet e-mail standards have become the means of interconnecting most of the world's e-mail systems. E-mail can also be used to create collaborative groups through the use of special e-mail accounts called reflectors, or exploders. Users with a common interest join a mailing list, or alias, and this account automatically distributes mail to all its members. The World Wide Web allows users to create and use point-and-click hypermedia presentations. These documents are linked across the Internet to form a vast repository of information that can be browsed easily. Gopher allows users to create and use computer file directories. This service is linked across the Internet to allow other users to browse files. File Transfer Protocol (FTP) allows users to transfer computer files easily between host computers. This is still the primary use of the Internet, especially for software distribution, and many public distribution sites exist. 
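The growth figures quoted in this entry (5 million computers in early 1995, growing at about 9 percent per month) imply remarkably fast doubling. A quick sketch of the arithmetic:

```python
# Compound growth at 9 percent per month: how many months until the
# early-1995 figure of 5 million connected computers doubles?
computers = 5_000_000
months = 0
while computers < 10_000_000:
    computers *= 1.09
    months += 1
print(months)  # -> 9
```

At that rate the network doubles in well under a year, which is consistent with the jump from 2 million computers in January 1994 to 10 million at the end of 1995 cited in the preceding essay.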
The Usenet service allows users to distribute news messages automatically among thousands of structured newsgroups. Telnet allows users to log in to another computer from a remote location. Simple Network Management Protocol (SNMP) allows almost any Internet object to be remotely monitored and controlled. Connection: Internets are constructed using many kinds of electronic transport media, including optical fiber, telephone lines, satellite systems, and local area networks. They can connect almost any kind of computer or operating system, and the connected systems are aware of one another's capabilities. An internet is usually implemented using international standards collectively called Transmission Control Protocol/Internet Protocol (TCP/IP). The protocols are implemented in software running on the connected computers. Computers connected to an internet are called hosts. Computers that route data packets to other computers are called routers. Networks and computers that are part of the global Internet possess unique registered addresses and obtain access from Internet service providers. There are four ways to connect to the public Internet: by host, network, terminal, or gateway access. Host access is usually done either over local area networks or over telephone lines and modems combined with Internet software on a personal computer. Host access allows the attached computer to interact fully with any other attached computer, limited only by the bandwidth of the connection and the capability of the computer. Network access is similar to host access, but it is usually done via a leased telephone line that connects to a local or wide area network. All the attached computers can become Internet hosts. Terminal access is usually done via telephone lines and modems combined with terminal-emulation software on a personal computer. It allows interaction with another computer that is an Internet host. 
Gateway access is similar to terminal access but is provided via on-line or similar proprietary services, or via other networks such as Bitnet, Fidonet, or UUCP nets, which allow users, at a minimum, to exchange e-mail with the Internet. Development: The Internet technology was developed principally by American computer scientist Vinton Cerf in 1973 as part of a United States Department of Defense Advanced Research Projects Agency (DARPA) project managed by American engineer Robert Kahn. In 1984 the development of the technology and the running of the network were turned over to the private sector and to government research and scientific agencies for further development. Since its inception, the Internet has continued to grow rapidly. In early 1995, access was available in 180 countries and there were more than 30 million users. It is expected that 100 million computers will be connected via the public Internet by 2000, and even more via enterprise internets. The technology and the Internet have supported global collaboration among people and organizations, information sharing, network innovations, and rapid business transactions. The development of the World Wide Web is fueling the introduction of new business tools and uses that may lead to billions of dollars worth of business transactions on the Internet in the future. On the Internet nowadays, the majority of computers are from the commercial sphere (Vrabec 1996). In fact, the commercialisation of the network, which has been taking place during the last three or four years, has caused the recent boom of the network, and of the WWW service in particular (Vrabec 1996). It all started in the network's homeland in 1986, when ARPANET was gradually replaced by a newer and technologically better-built network called NSFNET. This network was more open to private and commercial organisations (Vrabec 1996) which, realising the potential commercial use of the Internet, started to connect themselves to the network. 
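The host-to-host TCP/IP communication described earlier can be sketched in a few lines of modern Python (an illustration only; the language postdates the essay, and the message and local-loopback setup are invented for the demonstration). One socket plays a listening host, a second connects to it, and they exchange data directly:

```python
import socket
import threading

# One "host" listens for a TCP connection and echoes back whatever it receives.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_one():
    conn, _addr = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo the received data back

t = threading.Thread(target=serve_one)
t.start()

# A second "host" connects, sends a message, and reads the reply.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello from another host")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply.decode())  # -> hello from another host
```

Both endpoints here are "hosts" in the essay's sense: each runs the TCP/IP software itself and can interact fully with the other, limited only by bandwidth.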
There are several ways in which commercial organisations can benefit from their connection to the English-speaking Internet. Internet users are expected to be able to speak and understand English, and in fact most of them do. With the rapidly rising number of users, the network is a potential world market (Vrabec 1996), and English will be its important tool. The status of English as a world language, or rather the large number of people who are able to process and use information in English, already enables commercial organisations to present themselves, their work and their products on the Internet. Thanks to English and the Internet, companies can correspond with their partners abroad and respond almost immediately to any question, or give advice on any problem, that their international customers may have with their products (Vrabec 1996). Considering the fact that many of the biggest, economically strongest and most influential organisations are from the USA or other native English-speaking countries, commercialisation has very much reinforced the use of English on the Internet. BIBLIOGRAPHY: Cepek, Ales and Vrabec, Vladimir 1995 Internet :-) CZ, Praha, Grada. Demel, Jiri 1995 Internet pro zacatecniky, Praha, NEKLAN. Falk, Bennett 1994 Internet ROADMAP, translated by David Krásenský, Praha, Computer Press. Jenkins, Simon 1995 "The Triumph of English", The Times, May 1995. Phillipson, Robert 1992 Linguistic Imperialism, Oxford, Oxford University Press. Schmidt, Jan 1996 "Carka, hacek a WWW", Computer Echo Vol. 3/6 (also available at http://omicron.felk.cvut.cz/~comecho/ce/journal.html). Sterling, Bruce 1993 "A Short History of the Internet", The Magazine of Fantasy and Science Fiction, Feb. 1993. Vrabec, Vladimir 1996 "Komerce na Internetu", LanCom, Vol. 4/3.
f:\12000 essays\technology & computers (295)\Telecommuting.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Telecommuting is a very interesting and complex subject. 
The pros and cons of this concept are numerous, and both sides have excellent arguments. In the research I've done, I feel I have to argue both sides to maintain a sense of perspective. I had mixed feelings about telecommuting before I started this research, and I find that this is something many others have in common with me. The reasons for and against telecommuting can be complex or simple, depending on which viewpoint you take. From a manager's viewpoint, telecommuting is a risky undertaking that requires a high readiness level on the employee's part. Allowing an employee with a low (R1 or R2) readiness level to telecommute is not likely to produce a positive result. When an employee has a high readiness level and a definite desire to attempt working at home, for one reason or another, many factors should be considered. What kind of schedule does the employee feel constitutes telecommuting? Generally speaking, telecommuting is defined as spending at least one day out of a five-day work week working at home. Is one day at home enough for the employee, or too little? How does the employer decide how many days to allow? Does the employee's job lend itself well to telecommuting? Some jobs, obviously, can't be accomplished in a telecommuting format. Does the employee have a good track record for working unsupervised? This relates back to readiness levels. An employee who isn't performing at a high readiness level should not even be considered as a candidate for telecommuting. All of these questions, and many more, must be answered on a case-by-case basis. This particular venture into creative scheduling has its ups and downs from the employee's point of view as well. It can be quite a bed of roses for both employee and employer: a lot of nice smells and pretty sights, but watch out for the thorns. 
In several studies I reviewed, I noticed that the telecommuting population loses many of the basic social contacts associated with the office environment. Judging the correct amount of time an employee should spend working at home, relative to working at the office, can have a significant impact on both performance and satisfaction. It's usually hard for someone to cut themselves off completely from their work environment and still perform well. The sense of being out of touch with the rest of the work force can be mitigated by the use of e-mail, teleconferencing, and the ever-faithful telephone. These devices, in a best-case scenario, can completely substitute for face-to-face interaction. That's a strong statement, so I would like to explain a few conditions. The best-case scenario assumes an individual is at a very high readiness level and has very little perceived need for social interaction with the other office employees. In a worst-case scenario, an employee can lose touch with the pulse of the office, lose motivation, and see their readiness level drop. This type of scenario is likely to get out of hand if the employee is never in the office to receive the appropriate feedback. It may sound as if I'm not really impressed with telecommuting, but that's not true. Let's look at a few of the really solid benefits for the employer. The employer can offer telecommuting as an option for prospective employees to improve recruitment. Current employees could be offered it to keep them around, and saving one employee could save the company a large amount of money. "Most employers don't keep accurate records of the costs of losing good employees and finding and retraining replacements, but there have been estimates ranging from $30,000 to over $100,000 to replace a professional." The ever-present crunch for space could drive a company to reduce the amount of office space it requires; telecommuting makes employees provide their own office space. 
It's been shown that telecommuting does increase productivity, with typical gains in the 15 to 25 percent range. These gains may come from the significantly smaller amount of time a person spends at the company water cooler. A company can also improve customer service by making use of telecommuters: it costs much less to have a few people answering phones at home at 3 o'clock in the morning than to run a skeleton crew in a heated, air-conditioned, and lighted office building. So what's in it for the employee? That depends mostly on which particular employee we are referring to. Telecommuting allows someone with a physical handicap, who could not actually commute to the workplace, to still function as a valuable employee. It allows someone who has small children, and feels a great need to be home for them, to still work and have a career. The distance an employee must travel to work each day is a factor that can introduce great amounts of frustration and expense into their life; telecommuting can alleviate this stress. Job satisfaction can be enhanced by allowing greater freedom and bestowing greater responsibility. Employees should be aware of some of the pitfalls of telecommuting as well as the benefits. It is estimated that telecommuters earn less overall than office workers: as a general rule, a professional telecommuter will earn approximately 91% of the wage of a comparable office-based professional, and a similar gap holds for clerical workers. All of these considerations must factor into a company's decision to implement a telecommuting program. Many factors must be taken into account, and clear organizational goals must be stated. It is vitally important for management to support the program and for a great degree of trust to exist between employer and employee. Implementation of a pilot program can take years and involve many aspects of the company as a whole. On the whole, I am impressed with the possibilities that telecommuting presents and daunted by the problems that can crop up. 
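The figures quoted above can be combined into a rough back-of-the-envelope calculation. This is only an illustration: the $50,000 salary is a hypothetical assumption, and the other numbers are simply the low ends of the ranges cited in the text.

```python
# Illustrative figures; the salary is a hypothetical assumption, the
# rest are the low ends of the ranges cited in the discussion above.
office_salary = 50_000          # assumed annual salary of an office professional
productivity_gain = 0.15        # low end of the 15-25% productivity range
telecommuter_wage_ratio = 0.91  # telecommuters earn about 91% of office wages
replacement_cost = 30_000       # low end of the $30,000-$100,000 estimate

# Employer's view: payroll saving plus extra output, valued in salary terms.
payroll_saving = office_salary * (1 - telecommuter_wage_ratio)
extra_output_value = office_salary * productivity_gain
annual_benefit = payroll_saving + extra_output_value

print(f"annual benefit per telecommuter: ${annual_benefit:,.0f}")
# If telecommuting retains one employee who would otherwise quit, the
# avoided replacement cost is a one-time saving on top of that.
print(f"one-time saving per retained employee: ${replacement_cost:,.0f}")
```

Even at the low ends of every range, the employer-side numbers are substantial, which is why the management-support and trust issues above matter so much.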
I feel that a well-thought-out, carefully planned, and conscientiously applied program can benefit most companies in most situations. I don't feel that telecommuting is for every company, but it could certainly benefit many. Bibliography 1. Crabb, Don, et al., "Is It Time to Telecommute?", Byte Magazine, May 1991, Vol. 16, Issue 5. 2. Janal, D., "Workplace", Compute! Magazine, Oct. 1991, Vol. 13, Issue 10. 3. Christensen, Kathleen E., The New Era of Home-Based Work: Directions and Policies, Westview Press, 1988. 4. Ramsower, Reagan Mays, Telecommuting: The Organizational and Behavioral Effects of Working at Home, UMI Research Press, 1985.
f:\12000 essays\technology & computers (295)\Telephones.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TELEPHONES About 100 years ago, Alexander Graham Bell invented the telephone by accident with his assistant, Mr. Watson. Over the years, the modern version of the telephone has made the one Bell invented look like a piece of junk. Developments in tone dialing, call tracing, music on hold, and electronic ringers have greatly changed the telephone. This marvelous invention allows us to communicate with the entire globe 24 hours a day just by punching in a simple telephone number. It is the most used piece of electronic apparatus in the world, and probably one of the easiest to use as well. All you have to do is pick up the receiver, listen for the tone, and then select a number using either a tone or pulse dial. Telephones can be separated into two main categories: tone (touch-tone) telephones and the older rotary-dial (pulse) telephones. You can then divide those into other categories, such as business lines (multi-line) or home lines (single line). There are also many other types of phones: those that hang on the wall, sit on the desk, and so on. THE HANDSET No matter what kind of telephone you own, there has to be some device that allows you to talk and listen. 
This device is called the handset. The handset is usually made of plastic, and inside it are two main components: the transmitter and the receiver. THE TRANSMITTER It is the job of the transmitter to turn the air-pressure variations created by your sound waves into electrical signals so they can be sent to the other telephone. The waves hit a thin skin called the diaphragm, which is physically connected to a reservoir of carbon granules. When the pressure hits the diaphragm, it shakes up the carbon granules; the packing of the granules tightens and loosens, depending on what force is exerted, which changes their electrical resistance. At two points on the outer shell of the carbon reservoir are two electrical connections from the talk battery. With voltage applied, a current flows, varies with the resistance of the granules, and is passed along the lines to the waiting telephone. At the other end the current is transformed back into speech. THE RECEIVER The receiver turns the ever-varying current back into speech. A permanently magnetized soft iron core is wound with many turns of very fine wire, through which the electrical current is applied. The current attracts and repels an iron diaphragm. The vibrations of the diaphragm create varying air pressure, and these pressure waves are translated by the ear into intelligible speech. TELEPHONE NETWORKS If you have ever opened up a phone (do not try this at home, you might screw it up) you will probably have seen a PC (printed circuit) board. The board contains the electronics needed for the phone to work properly. In older models, this board may look like an electronic box. This board is called the telephone network. The telephone network's function is to provide all the necessary components and termination points (screw-on or push-on terminals). The components and termination points connect and match the impedance of the handset (transmitter and receiver) to a two-wire telephone circuit. Every component in the telephone has to be connected to the PC board. 
Usually, the board is the most reliable component inside the phone, and all the delicate components are securely sealed in a metal enclosure. Still, the PC board is a fragile object and can be broken easily. If you look closely, you can see wires poking out of the board, soldered to the terminal legs. If you break one of those wires, man, are you dead! THE TELEPHONE HOOK SWITCH Every time you finish talking over a line, you need to disconnect. The simplest way to do this is to set the handset down. While resting in its cradle, the handset presses on a spring-loaded operating arm, which is connected to a number of switch contacts. When this happens, the phone disconnects. THE PHONE RINGER Once a call has been dialed through, the telephone of the person being called must be given some kind of signal to let him/her know that he/she is being called. This is when the telephone rings. This signal is generated using an alternating current somewhere between 90 and 220 V, with a frequency of 30 Hz. But what if you have 5 or 6 phones connected on a party line? How can you signal just one telephone to ring? The answer is frequency selection. Older telephones had different capacitor and ringer-coil impedance values, and it was these small differences that made the bell respond to one particular frequency. For example, suppose you have 5 telephones on one party line. If a call came through for line 1, the central board would send a 10 Hz (this is a guess) signal down the party line; line 1 would ring and all the others would remain quiet. If the call was for line 5, the central board would send a 50 Hz (this is also a guess) signal so that only line 5 would ring. The phone rings because applying voltage where needed creates a resonant circuit. The xx Hz signal creates a magnetic field around a device called the hammer, and the hammer is attracted and then repelled by the constantly changing magnetic field. 
If two gongs are placed on either side of the hammer, the hammer strikes each gong in turn. Other phones use a one-gong system, which works like the two-gong system but is more compact. Because of its compact size, this ringer is perfect for small wall phones and the like. THE ROTARY DIAL (PULSE) A rotary dial creates equally spaced make-and-break pulses according to how far the plastic dialing plate is turned. The dialing plate has regularly spaced holes to dial with and a metal stop called the finger stop, which makes dialing the number you want as easy as 1-2-3. Each hole in the wheel represents one number, 1 through 0 (which sends ten pulses). Using some small gears and a governor that controls the return speed of the finger wheel after you have dialed, the internal switches are opened and closed at a rate of about 10 pulses per second. The number of pulses created is determined by how far the finger wheel has been turned before being stopped by the finger stop. Say you dial the number 5: the internal switches open and close 5 times as the wheel returns. During dialing, a second set of switches remains closed for the entire time that you are dialing. The purpose of this second set of switches is to keep the telephone receiver shorted for the whole dialing period; if this switch were not there, you would hear loud and frustrating clicks in the receiver. TONE DIALING There is an alternative to the pulse dial: the tone dial. Phones that use tone dialing are built with circuitry that generates tones on the phone line, and these tones are translated by the central board into numbers. The practice of putting an audio signal on the telephone line as a dialing method is called DTMF (dual-tone multi-frequency) dialing. It is called this because the tone dial makes a combination of two tones: each signal is a mixture of one high and one low frequency. 
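The two dialing schemes just described can be sketched in a few lines. The DTMF row/column frequencies below are the standard keypad values; the pulse logic simply counts make-and-break pulses per digit (ten for 0) and ignores real timing:

```python
# Rotary (pulse): each digit becomes that many make-and-break pulses; 0 sends ten.
def pulse_count(digit):
    return 10 if digit == 0 else digit

# DTMF: each key combines one low-group (row) and one high-group (column) tone.
LOW = [697, 770, 852, 941]   # Hz, the four keypad rows
HIGH = [1209, 1336, 1477]    # Hz, the three columns of a standard 12-key pad
KEYPAD = ["123", "456", "789", "*0#"]

def dtmf_pair(key):
    for row, keys in enumerate(KEYPAD):
        if key in keys:
            return (LOW[row], HIGH[keys.index(key)])
    raise ValueError(f"not a keypad key: {key}")

print([pulse_count(int(d)) for d in "1950"])  # -> [1, 9, 5, 10]
print(dtmf_pair("5"))                         # -> (770, 1336)
```

Pressing "5" thus puts a 770 Hz and a 1336 Hz tone on the line at the same time, and the central board recovers the digit from that unique pair.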
When a button on the dial pad is pushed, one vertical (column) tone and one horizontal (row) tone are combined to make a signal. It is this newly made tone pair that is sent to the central board and then translated back into the number. TELEPHONE CORDS Older telephone cords were made with fork-shaped pieces of metal attached to the wires with a tool called a crimper. When installed, these wires were screwed into the terminal box on the wall. This is really a pain in the rear end, because if you are going to fix the phone, you have to unscrew the box and then all the screws; this process could last for hours at a time. To make the job a lot easier, coiled cords and modular lines were invented. To take out the handset or telephone, all you have to do is unplug the modular connector from its socket, and that is it. Modular cords can be bought in nearly any electronics store. There are three kinds of cords. The first is the full modular cord, with small modular clips on both ends. The second is the one mentioned above, called the spade-lug cord. The third is called the 1/4 modular; this cord has a modular connector on one side and the old-fashioned spade-lug end on the other. These 1/4 cords are not very common. BIBLIOGRAPHY BOOK: THE TALKING TELEPHONE AUTHOR: STEVE SOKOLOWSKI PUBLISHER: TAB BOOKS, NOV. 1991
f:\12000 essays\technology & computers (295)\TELNET.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TELNET PURPOSE OF THIS REPORT Before gophers, hypertext, and sophisticated web browsers, telnet was the primary means by which computer users connected their machines with other computers around the world. Telnet is a plain-ASCII terminal emulation protocol that is still used to access a variety of information sources, most notably libraries and local BBSs. This report will trace the history and usage of this still popular and widely used protocol and explain where and how it still manages to fit in today. 
HISTORY AND FUTURE OF TELNET "Telnet" is the accepted name of the Internet protocol, and the command name on UNIX systems, for a type of terminal emulation program that allows users to log into remote computer networks, whether the network being targeted for login is physically in the next room or halfway around the globe. A common program feature is the ability to emulate several diverse types of terminals -- ANSI, TTY, vt52, and more. In the early days of networking, some ten to fifteen years ago, the "internet" more or less consisted of telnet, FTP (File Transfer Protocol), crude e-mail programs, and news reading. Telnet made library catalogs, online services, bulletin boards, databases, and other network services available to casual computer users, although not with the friendly graphical user interfaces one sees today. Each of the early internet functions could be invoked from the UNIX prompt; however, each of them used a different client program with its own unique problems. Internet software has since greatly matured, with modern web browsers (e.g., Netscape and Internet Explorer) easily handling the WWW protocol (HTTP) along with the protocols for FTP, gopher, news, and e-mail. Only the telnet protocol still requires the use of an external program. Due to problems with printing and saving, and the primitive look and feel of telnet connections, a movement is underway to transform information resources from telnet-accessible sites into full-fledged web sites. However, it is estimated that it will take several years before quality web interfaces exist for all of the resources currently available only via telnet. Therefore, knowing the underlying command structure of terminal emulation programs like telnet is likely to remain necessary for the networking professional for some time to come. 
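Underneath, a telnet session is ordinary text carried over a TCP connection, interleaved with option-negotiation sequences that the protocol introduces with the IAC byte (value 255). A client has to separate the negotiation bytes from the text it displays. The following is a simplified sketch of that filtering step (the sample banner is invented, the input is assumed well-formed, and a real client answers the negotiations rather than just discarding them):

```python
IAC, SB, SE = 255, 250, 240  # telnet "Interpret As Command", subnegotiation markers
# Commands WILL/WONT/DO/DONT (251-254) are each followed by one option byte.

def strip_telnet_negotiation(data):
    # Remove IAC option-negotiation sequences, keeping only the text a
    # telnet client would show on screen. Assumes well-formed input.
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        if b != IAC:
            out.append(b)            # ordinary text byte
            i += 1
            continue
        cmd = data[i + 1]
        if cmd == IAC:               # escaped literal 255 data byte
            out.append(IAC)
            i += 2
        elif cmd == SB:              # subnegotiation: skip through IAC SE
            i = data.find(bytes([IAC, SE]), i + 2) + 2
        elif 251 <= cmd <= 254:      # WILL/WONT/DO/DONT plus its option byte
            i += 3
        else:                        # other two-byte commands
            i += 2
    return bytes(out)

# A made-up server banner with a DO ECHO negotiation (option 1) in front of it:
raw = b"\xff\xfd\x01Welcome to the library catalog\r\n"
print(strip_telnet_negotiation(raw).decode())
```

This in-band negotiation is what distinguishes telnet from a raw TCP connection, and it is one reason a plain terminal program cannot always stand in for a proper telnet client.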
ADVANTAGES AND DISADVANTAGES OF TELNET The chief advantage of the telnet protocol today lies in the fact that many services, and most library catalogs, on the Internet remain accessible only via a telnet connection. Since telnet is a terminal application, many see it as a mere holdover from the days of mainframes and minicomputers; however, the recent interest in $500 Internet terminals may foretell a resurgence for this style of computing. Disadvantages include the aforementioned problems that telnet tends to have with printing and saving files, and its primitive look and feel when compared to more modern web browsers. OTHER APPROACHES The functionality of the telnet protocol may be compared with that of the UNIX "rlogin" command, an older remote-login command that still has some utility today. Rlogin is a protocol invoked by users with accounts on two different UNIX machines; it allows connections for certain specified users without a password. This requires setting up a ".rhosts" or "/etc/hosts.equiv" file and may involve some security risks, so caution is advised. Using telnet instead of the rlogin command will accomplish the same results, but the rlogin command saves keystrokes, particularly if it is used in conjunction with an alias. CONCLUSION Some argue that the future of the Internet lies in sophisticated web browsers like Netscape and Internet Explorer, or in tools such as Gopher, which "save" end users from having to deal with the command-line prompt and the peculiar details of commands like telnet. While that may be the case, programmers tend to develop new software by building on the old. Therefore, knowing the underlying command structure of older protocols like telnet and rlogin is likely to remain an essential skill for the networking professional for the foreseeable future. 
f:\12000 essays\technology & computers (295)\The Antitrust Case Against Microsoft.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Anti-Trust Case Against Microsoft Since 1990, a battle has raged in United States courts between the United States government and the Microsoft Corporation of Redmond, Washington, headed by Bill Gates. What is at stake is money. The federal government maintains that Microsoft's monopolistic practices are harmful to United States citizens, creating higher prices and potentially downgrading software quality, and should therefore be stopped, while Microsoft and its supporters claim that they are not breaking any laws and are just doing good business. Microsoft's antitrust problems began in the early months of 1990 (Check 1), when the Federal Trade Commission began investigating the company for possible violations of the Sherman and Clayton Antitrust Acts (Maldoom 1), which are designed to stop the formation of monopolies. The investigation continued for the next three years without resolution, until Novell, maker of DR-DOS, a competitor of Microsoft's MS-DOS, filed a complaint with the Competition Directorate of the European Commission in June of 1993 (Maldoom 1). This stalled the investigation even further, until finally, in August of 1993 (Check 1), the Federal Trade Commission decided to hand the case over to the Department of Justice. The Department of Justice moved quickly, with Anne K. Bingaman, head of the Antitrust Division of the DOJ, leading the way (Check 1). The case was finally ended on July 15, 1994, with Microsoft signing a consent settlement (Check 1). The settlement focused on Microsoft's selling practices with computer manufacturers. 
Up until then, Microsoft would sell MS-DOS and its other operating systems to original equipment manufacturers (OEMs) at a 60% discount if the OEM agreed to pay a royalty to Microsoft for every single computer that it sold, regardless of whether it had a Microsoft operating system installed on it or not (Check 2). After the settlement, Microsoft would be forced to charge for its operating systems according to the number of computers shipped with a Microsoft operating system installed, not for computers that ran other operating systems (Check 2). Another practice that the Justice Department accused Microsoft of was specifying a minimum number of operating systems that a retailer had to buy, thus eliminating any chance for another operating-system vendor to get its system installed until the retailer had sold all of the Microsoft operating systems it had committed to (Maldoom 2). In addition to specifying a minimum number of operating systems that a vendor had to buy, Microsoft also would sign contracts with vendors for long periods of time, such as two or three years. In order for a new operating system to gain popularity, it would have to do so quickly, to show potential buyers that it was worth something. By signing long-term contracts, Microsoft eliminated the chance for a new operating system to gain the popularity it needed quickly (Maldoom 2). Probably the second most controversial issue, besides the per-processor agreement, was Microsoft's practice of tying. 
Tying was a practice in which Microsoft would use its leverage in one market, such as graphical user interfaces, to gain leverage in another market, such as operating systems, where it might have competition (Maldoom 2). In the preceding example, Microsoft would use its graphical user interface, Windows, to sell its operating system, DOS, by offering discounts to manufacturers that purchased both MS-DOS and Windows, and by threatening not to sell Windows to companies that did not also purchase DOS. In the end, Microsoft decided to suck it up and sign the settlement agreement. In signing the agreement, Microsoft did not actually have to admit to any of the alleged charges, but was able to escape any type of formal punishment, such as fines and the like. The settlement that Microsoft agreed to prohibits it, for the next six and a half years, from: charging for its operating system on the basis of computers shipped rather than on copies of MS-DOS shipped; imposing minimum quantity commitments on manufacturers; signing contracts for greater than one year; and tying the sale of MS-DOS to the sale of other Microsoft products (Maldoom 1). Although these penalties look to put an end to all of Microsoft's evil practices, some people think that they are not harsh enough and that Microsoft should have been split up to prevent any chance of it forming a true monopoly of the operating system market and of the entire software market. On one side of the issue are the people who feel that Microsoft should be left alone, at least for the time being. I am one of these people, feeling that Microsoft does more good than bad, thus not necessitating a breakup. I feel this way for many reasons, and until Microsoft does something terribly wrong or illegal, my opinion will stay this way. First and foremost, Microsoft sets standards for the rest of the industry to follow. 
Jesse Berst, editorial director of the Windows Watcher newsletter out of Redmond, Washington, and executive director of the Windows Solutions Conference, says it best with this statement: "To use a railroad analogy, Microsoft builds the tracks on which the rest of the industry ships its products." ("Why Microsoft (Mostly) Shouldn't Be Stopped." 4) With Microsoft creating the standards for the rest of the computer industry, it is able to create better standards, and build them much faster, than if an outside organization or committee were to create them. With these standards set, other companies can create their applications and other products that much faster and better, and thus customers receive that much better a product. Take, for instance, the current effort to develop the Digital Video Disc (DVD) standard. DVDs are compact discs capable of storing 4,900 megabytes of information, as opposed to the 650 megabytes that can be stored on a CD-ROM disc now. For this reason, DVDs have enormous possibilities in both the computer industry and the movie industry. For about the last year, companies such as Sony, Mitsubishi, and other prominent electronics manufacturers have been trying to decide on a set of standards for the DVD format. Unfortunately, these standards meetings have gone nowhere, and subsequently many of the companies have broken off in different directions, trying to develop their own standards. In the end, there won't be one definite standard but instead many standards, all very different from one another. Consumers will be forced to decide which standard to choose, and if they pick the wrong one, they could be stuck down the road with a DVD player that is worthless. 
Had only one company set the standards, much like Microsoft has in the software business, there wouldn't be the confusion that arose, and consumers could sit back and relax, knowing that the DVD format is secure and won't be changed. Another conclusion that many anti-Microsoft people and others around the world jump to is that the moment a company such as Microsoft becomes very successful, there must be something wrong; it has to be doing something illegal or immoral to have become this immense. This is not the case. Contrary to popular belief, Microsoft has not gained its enormous popularity through monopolistic and illegal measures, but instead through superior products. I feel that people do have brains, and therefore have the capacity to make rational decisions based on what they think is right. If people didn't like the Microsoft operating systems, there are about a hundred other choices for operating systems, any of which could replace Microsoft if the people wanted it to. But they don't; the people for the most part want Microsoft operating systems. For this reason, I don't accept the excuse that Microsoft has gained its popularity through illegal measures. It simply created products that the people liked, and the people bought them. On the other side of the issue are the people who believe that Microsoft is indeed operating in a monopolistic manner and that therefore the government should intervene and split Microsoft up. Those who assume that Microsoft should indeed be split up fall into two groups. One believes it should be split into two separate companies: one dealing with operating systems and the other dealing strictly with applications. The other group believes that the government should further split Microsoft up into three divisions: one company to create operating systems, one company to create office applications, and one company to create applications for the home. 
All of these people agree that Microsoft should be split up, any way possible. The first thing that proponents of splitting Microsoft up argue is that although Microsoft has created all kinds of standards for the computer software industry, in today's world we don't necessarily need standards. Competing technologies can coexist in today's society without the need for standards set by an external body or by a lone company such as Microsoft. A good analogy for this position is given in the paper "A Case Against Microsoft: Myth Number 4." In this article, the author states that people who think we need such standards give the example of the home video cassette industry of the late 1970s. He says that these people point out that in the battle between the VHS and Beta video formats, VHS won not because it was a superior product, but because it was more successfully marketed. He then goes on to point out that buying an operating system for a computer is nothing at all like purchasing a VCR, because the operating system of a computer defines that computer's personality, whereas a VCR's only function is to play movies, and both VHS and Beta do the job equally well. Also, with the development of camcorders, many new formats for video tapes have been introduced that are all being used at once. The VHS-C, S-VHS, and 8mm formats are all coexisting in the camcorder market, showing that maybe in our society today we are not in need of one standard. Maybe we can get along just as well with more than one. Along the same lines, there are quite a few other industries that can get along without one standard. Take for instance the automobile industry. 
If you accepted the idea that one standard was best for everyone involved, then you would never be tempted to purchase a BMW, Lexus, Infiniti, Saab, or Porsche automobile, due to the fact that these cars all have less than one percent market share in the automobile industry and therefore will never be standards. Probably the biggest proponent of government intervention into the Microsoft issue is Netscape Communications, based out of Mountain View, California. Netscape has filed lawsuits accusing Microsoft of tying again. ("Netscape's Complaint against MicroSoft." 2) This time, Microsoft is bundling its World Wide Web browser, Internet Explorer 3.0, into its operating system, Windows 95. Netscape is the maker of Netscape Navigator, currently the most widely used Internet browser on the market, and is now facing some fierce competition from Microsoft's Internet Explorer. Netscape says that in addition to bundling the browser, Microsoft was offering Windows at a discount to original equipment manufacturers (OEMs) ("Netscape's Complaint against MicroSoft." 2) to feature Internet Explorer on the desktop of the computers that they shipped, thus eliminating any competition for space on the desktop by rival companies such as Netscape. If the OEM wants to give the consumer a fair and even choice of browsers by placing competitors' browser icons in a comparable place on the desktop, Netscape has been informed that the OEM must pay $3 more for Windows 95 than an OEM that takes the Windows bundle as is and agrees to make the competitors' browsers far less accessible and useful to customers. ("Netscape's Complaint against MicroSoft." 2) Another accusation that Netscape is making against Microsoft is that it is doing the same type of thing with the large Internet service providers of the nation. 
They are offering the large Internet providers of the nation, such as Netcom and AT&T, space on the Windows 95 desktop, in return for the Internet provider's consent that it will not offer Netscape Navigator, or any other competing Internet software, to its customers. ("Netscape's Complaint against MicroSoft." 3) Netscape is becoming ever more concerned with Microsoft's practices, because for now they are going untouched by the government, and it looks as if it will stay that way for quite some time. They are very much worried as they watch the number of users switching to Microsoft's browser grow, and the number of users using Navigator slip. Besides all of the accusations of monopolistic actions Netscape has laid on it, Microsoft does seem to have one advantage when it comes to the browser wars. Its new browser, version 3.0, matches Netscape's feature for feature, with one added plus: it is free, and Microsoft says that it always will be. So is its Internet server, Internet Information Server, whereas Netscape charges $50 and $1500 for its browser and its web server, respectively. ("Netscape's Complaint against MicroSoft." 3) With all the information that has been presented for both sides of the issue, you are probably left in a daze, not knowing what to think. Is Microsoft good? Or is Microsoft bad? Well, the answer is a little bit of both. Even though the Justice Department found that Microsoft might be practicing some techniques that are less than ethical, it did not find that Microsoft was breaking any antitrust laws, nor did Microsoft actually admit to the accusations when it signed the agreement. If anything, signing the agreement was more of an apology than a full-fledged admission of guilt. Other people might disagree with me, and there might be a lot of allegations floating around from different companies, but the fact of the matter is plain and simple. 
Microsoft has not been formally charged and found guilty of any illegal practices pertaining to its being a monopoly. I believe that the government should stay out of the affairs of the economy, rather than get tangled up in a mess and just end up deadlocked like the FTC did in 1990. And even if the government did get involved, due to the extremely fast-paced nature of the computer industry and the extremely slow nature of the government, there may not be any resolution for quite a while. f:\12000 essays\technology & computers (295)\The basics of a hard drive.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ I'm sure by now you have used a computer, be it to play games or write a paper. But do you know how a computer works and runs all the programs you want it to? Well, if not, I will tell you. To begin with, I will explain a little about the computer's history. About 50 years ago, or maybe a little longer, someone came up with the thought that all the boring stuff like math could be automated so humans would not have to do it all. Hence the computer; as to who exactly, I could not tell you. That person then began to work with his idea and figured out that if he could turn a machine on and off at a specified time for a specified time, he could in a way alter what it could do. To turn it on and off he came up with a very interesting way: he used a sheet that looked almost like a Scantron sheet, but with holes, and those holes were used to turn it on and off. The holes represented 1s and the non-holes 0s; the 1s turned it on and the 0s turned it off. With this knowledge he began to make little programs that could solve math problems. I guess he must have gotten bored with the math or something, because he came up with a way to let him play tic-tac-toe with the computer, which by the way was the first game ever to be created on a computer. 
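The punch-card idea described above can be sketched in a few lines of code. This is a hypothetical illustration, not how any particular early machine actually read its cards: each hole is read as a 1 (on), each blank spot as a 0 (off), and eight such bits in a row encode one character.

```python
# A sketch of the hole/no-hole idea: '#' marks a hole (a 1, machine on),
# '.' marks no hole (a 0, machine off). Eight bits make one character.

def decode_row(row):
    """Turn a row of holes and blanks into a bit string, then a character."""
    bits = ''.join('1' if spot == '#' else '0' for spot in row)
    return bits, chr(int(bits, 2))

bits, char = decode_row('.#..#...')   # holes in the second and fifth spots
print(bits, char)                     # 01001000 H
```

Strings of such rows are exactly the "little programs" the essay mentions: everything the machine does comes down to patterns of 1s and 0s.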
Now there is one more thing you have to know about this computer: it was half the size of West High School's gym. And it was thought that when it was economical for people to own their own computer, it would fill a decent-sized room. Could you imagine a computer filling up your entire living room? Where would you put your TV? But with the invention of the transistor and advances in miniaturization, computers have shrunk to a small fraction of that size, and every year they keep getting smaller and smaller; it is estimated that nearly 85 to 90 percent of American homes have at least one computer. Now that I have bored you to death with the history of computers, here's the fun stuff. Programs that let you play games and surf the net aren't just ideas put in a nifty little box and sold. They are ideas put on paper and then translated, by people called programmers, into a really, really huge math problem that the computer can understand; after all, the computer was invented to do math problems. From there the computer further translates the math problems into 1s and 0s, which in turn translate into the image you see on the computer screen. And all this is stored on a little thing called a hard drive. Now before I go too far into depth on this topic, imagine a city block with exactly 1000 houses on it. Every house can only store so much, so when one house fills up, the one next to it starts to fill, and being the nice people they are, they let the computer pull anything it needs out of the houses to use, and when it's done with the stuff, it puts it back in the same houses. The process described above is what makes up a hard drive. 
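The "city block of houses" analogy above can be turned into a minimal sketch: storage is a row of fixed-size slots, data fills one slot and spills into the next, and the computer can later pull the same chunks back out. The slot size and count here are made up for illustration; real drives use sectors of 512 bytes or more.

```python
# A minimal model of the analogy: a "city block" of fixed-size "houses".
SLOT_SIZE = 4      # each house holds 4 characters
NUM_SLOTS = 1000   # the whole city block

slots = [''] * NUM_SLOTS

def write(data, start=0):
    """Fill consecutive slots with data, one SLOT_SIZE chunk per slot."""
    for i in range(0, len(data), SLOT_SIZE):
        slots[start + i // SLOT_SIZE] = data[i:i + SLOT_SIZE]

def read(start, count):
    """Pull the chunks back out of the same slots and join them."""
    return ''.join(slots[start:start + count])

write('HELLO, HARD DRIVE')   # 17 characters spill across 5 slots
print(read(0, 5))            # HELLO, HARD DRIVE
```

Real file systems add one more layer the analogy skips: a table recording which slots belong to which file, so chunks need not even be next door to each other.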
f:\12000 essays\technology & computers (295)\The Central Processing Unit.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The Central Processing Unit Microprocessors, also called central processing units (CPUs), are frequently described as the "brains" of a computer, because they act as the central control for the processing of data in personal computers (PCs) and other computers. Chipsets perform logic functions in computers based on Intel processors. Motherboards combine Intel microprocessors and chipsets to form the basic subsystem of a PC. Because it is part of every one of your computer's functions, it takes a fast processor to make a fast PC. These processors are all made of transistors. The first transistor was created in 1947 by a team of scientists at Bell Laboratories in New Jersey. Ever since 1947, transistors have shrunk dramatically in size, enabling more and more to be placed on each single chip. The transistor was not the only thing that had to be developed before a true CPU could be produced. There also had to be some type of surface on which to assemble the transistors. The first chip made of semiconductive material, or silicon, was invented in 1958 by Jack Kilby of Texas Instruments. Now we have the major elements needed to produce a CPU. In 1968 a company by the name of Intel was formed, and it began to produce CPUs shortly thereafter. Gordon Moore, one of the founders of Intel, predicted that the number of transistors placed on each CPU would double every 18 months or so. This sounds almost impossible; however, it has been a very accurate estimation of the evolution of CPUs. Intel introduced its first processor, the 4004, in November of 1971. This first processor had a clock speed of 108 kilohertz and 2,300 transistors. It was used mainly for simple arithmetic manipulation, such as in a calculator. Ever since this first processor was introduced, the market has done nothing but soar to unbelievable highs. 
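Moore's prediction quoted above is easy to check with a little arithmetic, starting from the 4004's 2,300 transistors in 1971. Both the 18-month figure the essay quotes and the 24-month figure that is also commonly cited are projected here for comparison.

```python
# Project transistor counts under Moore's doubling prediction.

def projected_transistors(start_count, start_year, year, months_per_doubling):
    """Count after doubling every `months_per_doubling` months since start_year."""
    doublings = (year - start_year) * 12 / months_per_doubling
    return start_count * 2 ** doublings

for period in (18, 24):
    count = projected_transistors(2300, 1971, 1993, period)
    print(f"doubling every {period} months -> about {count:,.0f} transistors by 1993")
```

With a 24-month doubling period the projection comes out to about 4.7 million transistors by 1993, in the same ballpark as the first Pentium's 3.1 million; the 18-month period runs well ahead of the actual counts, which is one reason the doubling time is often quoted as closer to two years.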
The first processor common in personal computers was the 8088. This processor was introduced in 1979. It could be purchased in different clock speeds, starting at 5 megahertz and going up to 10 megahertz. This CPU had 29,000 transistors. Then came the 80286 and 80386 processors. The 386 was the first processor to be introduced in DX, SX, and SL versions. Next came the 80486 processors, of which there were even more choices. The first 486 processor had 1,200,000 transistors, and the latest have 1.4 million transistors. Their clock speeds varied anywhere from 16 MHz on the first ones to 100 MHz on the most recent 486 processors, some of which are still in use in homes all around the country. Next came the Pentium processor, in March 1993, running at clock speeds of 60 and 66 MHz. These first Pentium processors had 3.1 million transistors and a 32-bit data path. Now Pentium processors range anywhere from 90 MHz to 200 MHz and are the most widely used processors today. Intel is currently producing two new Pentium processors with MMX technology. These two processors, running at 166 and 200 MHz, are made to accelerate graphics and multimedia software packages. Currently the newest processor to be introduced is a 400 MHz processor, also made by Intel. This new processor illustrates the performance potential of the new P6 architecture. It contains 7.5 million transistors and also includes the new MMX technology. f:\12000 essays\technology & computers (295)\The Changing Role of the Database Administrator.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ March 1996 Technology Changes Role of Database Administrator The database administrator (DBA) is responsible for managing and coordinating all database activities. The DBA's job description includes database design, user coordination, backup, recovery, overall performance, and database security. The database administrator plays a crucial role in managing data for the employer. 
In the past the DBA job has required sharp technical skills along with management ability (Shelly, Cashman, Waggoner 1992). However, the arrival on the scene of the relational database, along with rapidly changing technology, has modified the database administrator's role. This has required organizations to vary the way they handle database management. (Mullins 1995) Traditional database design and data access were complicated. The database administrator's job was to oversee any and all database-oriented tasks. This included database design and implementation, installation, upgrades, and SQL analysis and advice for application developers. The DBA was also responsible for backup and recovery, which required many complex utility programs run in a specified order. This was a time-consuming, energy-draining task. (Fosdick 1995) Databases are currently in the process of integration. Standardizing data, once done predominantly by large corporations, is now filtering down to medium-size and small companies. The meshing of old and new databases causes administrators to maintain two or three database products on a single network. (Wong 1995) Relational database management systems incorporate complex features and components to help with logic procedures. This requires organizations to expand the traditional approach to database management and administration. Modern database management systems not only share data, they implement the sharing of common data elements and code elements. (Mullins 1995) Currently, the more sought-after relational database products are incorporating more and more complex features and components to simplify procedural logic. Due to the complexity of today's relational databases, corporations are changing the established way of dealing with database management personnel. Traditionally, as new features were added to the database, more and more responsibility fell on the DBA. 
With the emergence of the relational database management system (RDBMS), we are now beginning to see a change in the database administrator's role. (Mullins 1995) The design of data access routines in a relational database demands extra participation from programmers. The database administrator simply checks the system's optimization choice, because the technology is responsible for building access paths to the data. Program design and Structured Query Language (SQL) tools have become essential requirements for the database administrator to do this job. However, this technology requires additional supervision, and many DBAs are not competent in SQL analysis and performance monitoring. The database administrator has had to master the skills of application logic and programming techniques. (Mullins 1995) The database administrator's job description and responsibilities have changed with technology. The DBA is greatly concerned with database quality, maintenance, and availability. If the relational database fails to perform, the database administrator will be held accountable for the failure. The role of the database administrator is expanding to include too many responsibilities for a single person. This has led to the DBA's job being split into two separate titles: a traditional DBA along with a procedural DBA. The traditional database administrator is responsible for organizing and managing data objects. However, with new technology, the DBA is not always responsible for debugging, utilities, or programming in C, COBOL, or SQL. (Mullins 1995) These tasks go to object-builder programming personnel who are familiar with object-oriented programming languages. With the database manager unqualified in SQL, the work is referred to object builders well versed in using C, COBOL, and SQL. (Sipolt 1995) The traditional database administrator's strength is in creating the physical design of the database. The procedural database administrator is an expert in accessing data. 
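The "checking the system's optimization choice" described above can be made concrete with a small sketch. The table and column names here are invented, and SQLite stands in for the kind of RDBMS the essay discusses, but the pattern is the classic one a DBA reviews: wrapping an indexed column in a function defeats the index, while the equivalent range predicate lets the optimizer use it.

```python
# Compare the optimizer's access path for two equivalent queries.
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE students (id INTEGER, enrolled DATE)')
con.execute('CREATE INDEX idx_enrolled ON students (enrolled)')

# substr() on the indexed column forces a full table scan...
slow = "SELECT id FROM students WHERE substr(enrolled, 1, 4) = '1996'"
# ...while the equivalent range predicate can use idx_enrolled.
fast = ("SELECT id FROM students "
        "WHERE enrolled >= '1996-01-01' AND enrolled < '1997-01-01'")

for label, sql in (('before rewrite', slow), ('after rewrite', fast)):
    plan = con.execute('EXPLAIN QUERY PLAN ' + sql).fetchall()
    print(label, '->', plan[0][-1])   # last column is the plan description
```

The first plan reports a scan of the whole table; the second reports a search using the index, which is exactly the difference an access path review is meant to catch.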
Procedural DBAs are responsible for procedural logic support, application code reviews, access path review and analysis, SQL rewrites, debugging, and analysis to assure optimal execution. (Mullins 1995) Along with the changing job description, administrators are facing increased demands from the corporations for which they work. Database administrators are responsible for staff cost control, hardware, and software, and are becoming increasingly responsible for the work quality and response time of their staff. (Riggsbee 1995) The job modifications are not the only change in this industry. Database administrators received a substantial increase in their wages in 1995. The average earnings for a DBA are now $52,572, according to the 1995 survey source. However, salaries differ according to the specific region of the country in which one resides. A mid-level database administrator in San Francisco earns $55,000 to $65,000, substantially more than our survey states. However, a Salt Lake City database administrator's salary fell between $30,000 and $35,000. Another area of salaries on the rise is the health care profession. Previously at the lower end of the pay scale, hospital pay is on the rise and currently mid-scale in the market. (Mael 1995) Companies no longer feel responsible for additional training or long-term retention of an employee. The current trend is to opt for a new employee rather than hire from within the company. Companies are willing to compensate new blood for their knowledge, rather than invest time and effort in training. This cold hard fact is true from top management down to data entry. Therefore, it is vital for individual database personnel to make sure they are receiving the proper training to prepare them for our rapidly changing technological world. (Mael 1995) The database administrator's role has become ambiguous. Therefore, the job description has been separated into two fields. 
The traditional database administrator is responsible for managing and organizing data. He is no longer responsible for programming in C, COBOL, or SQL. Traditional database administration personnel create the physical data design. The task of the procedural database administrator encompasses logic support, code review, and programming in SQL, C, or COBOL. The procedural database administrator's expertise is in data access. Our world of rapidly changing technology has placed greater demands on database administration personnel. The relational database has demanded modification of database administration into two separate specialties. This change should result in the traditional database administrator maintaining a managerial capacity, with responsibilities in the physical design of the database. The procedural database administrator's capacity is in the more technical aspects of building the relational database. His expertise in procedural logic support and data access path review and analysis ensures superior performance of the relational database. Works Cited Fosdick, Howard. "Managing Distributed Database Servers," Database Programming & Design, Dec. 1995, p. 533-537. Mael, Susan. "Want to Earn Big Money? West or Become CIO," Datamation, Oct. 1, 1995, p. 45-49. Mullins, Craig S. "The Procedural DBA," Database Programming & Design, Dec. 1995, p. 40-47. Riggsbee, Max. "Database Support: Can It Be Measured?," Database Programming & Design, July 1995, p. 32-37. Shelly, Cashman, Waggoner. Complete Computer Concepts and Programming in Microsoft Basic. Massachusetts: Boyd & Frazer Publishing Company, 1992. Sipolt, Michael J. "An Object Lesson in Management (Excerpt from 'The Object-Oriented Enterprise')," Datamation, July 1, 1995, p. 51-54. Wong, William. "Database Integration," Network VAR, Nov. 1995, p. 31-37. f:\12000 essays\technology & computers (295)\The Communications Decency Act.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The U.S. 
Government should not attempt to place restrictions on the Internet. The Internet does not belong to the United States, and it is not our responsibility to save the world, so why are we attempting to regulate something that belongs to the world? The Telecommunications Reform Act has done exactly that: put regulations on the Internet. Edward Cavazos quotes William Gibson: "As described in Neuromancer, Cyberspace was a consensual hallucination that felt and looked like a physical space but actually was a computer-generated construct representing abstract data." (1) When Gibson coined that phrase, he had no idea that it would become the household word that it is today. "Cyberspace now represents a vast array of computer systems accessible from remote physical locations." (Cavazos 2) The Internet has grown explosively over the last few years. "The Internet's growth since its beginnings in 1981. At that time, the number of host systems was 213 machines. At the time of this writing, twelve years later, the number has jumped to 1,313,000 systems connecting directly to the Internet." (Cavazos 10) "Privacy plays a unique role in American law." (Cavazos 13) Privacy is not explicitly provided for in the Constitution, yet most Internet users remain anonymous. Cavazos says, "Computers and digital communication technologies present a serious challenge to legislators and judges who try to meet the demands of economic and social change while protecting this most basic and fundamental personal freedom." Networks and the Internet make it easy for anyone with the proper equipment to look at information based around the world instantly and remain anonymous. "The right to conduct at least some forms of speech activity anonymously has been upheld by the U.S. Supreme Court." (Cavazos 15) In cyberspace it is extremely uncommon for someone to use their given name to conduct themselves; rather, they use pseudonyms, or "Handles". 
(Cavazos 14) Not only is it not illegal to use handles on most systems, but the sysop (system operator) does not have to allow anyone access to his data files on who the person behind the handle is. Some sysops make the information public, some give the option to the user, and some don't collect the information at all. The Internet brings forth many new concerns regarding crime and computers. With movies like Wargames, and more recently Hackers, becoming popular, computer crime is being blown out of proportion. "The word Hacker conjures up a vivid image in the popular media." (Cavazos 105) There are many types of computer crime that fall under the umbrella of "hacking." Cavazos says, "In 1986 Congress passed a comprehensive federal law outlawing many of the activities commonly referred to as 'hacking.'" (107) Breaking into a computer system without being given the proper access, traditional hacking, is illegal; hacking to obtain financial information is illegal; hacking into any department or agency of the United States is illegal; and passing out passwords with the intent for others to use them to hack into a system without authorization is also illegal. "One of the more troubling crimes committed in cyberspace is the illicit trafficking in credit card numbers and other account information." (Cavazos 109) Many people on the Internet use their credit cards to purchase things on-line. This is a dangerous practice, because anyone with your card number can do the same thing with your card. Millions of dollars' worth of goods and services a year are stolen using credit card fraud. Illegal as it is, many offenders are never caught. With the use of anonymous names and restricted access to providers' data on users, it becomes harder to catch criminals on-line. The "[Wire Fraud Act] makes it illegal for anyone to use any wire, radio, or television communication in interstate or foreign commerce to further a scheme to defraud people of money or goods." 
(Cavazos 110) This is interpreted to include telephone communications, and therefore computer communication as well. There is much fraud on the Internet today, and the fraud will continue until a feasible way to enforce the Wire Fraud Act comes about. Cavazos continues, "unauthorized duplication, distribution, and use of someone else's intellectual property is subject to civil and criminal penalties under the U.S. Copyright Act." (111) This "intellectual property" is defined to include computer software. (Cavazos 111) Software piracy was widespread and rampant even before the Internet became popular. The spread of computer viruses has been advanced by the popularity of the Internet. "A virus program is the result of someone developing a mischievous program that replicates itself, much like the living organism for which it is named." (Cavazos 114) Cyberspace allows for the rapid transfer and downloading of software over the entire world, and this includes viruses. If a file has been infected before you download it, you are infecting your system. If you then give any software in any medium to any other user, you run the risk of spreading the virus, just as if you had taken in a person sick with the bubonic plague. "Whatever the mechanism, there can be no doubt that virus software can be readily found in cyberspace." (Cavazos 115) The Electronic Communications Privacy Act was enacted to protect the rights of on-line users within the bounds of the United States. "Today the Electronic Communications Privacy Act (ECPA) makes it illegal to intercept or disclose private communications and provides victims of such conduct a right to sue anyone violating its mandate." 
(Cavazos 17) There are exceptions to this law: if you are a party to the communication, you can release it to the public; your provider can use the intercepted communication in the normal course of employment; your provider can intercept mail for the authorities if ordered by a court; the communication can be disclosed if it is public; and your provider can intercept communications to record the fact of the communication or to protect you from abuse of the system. If you are not careful as a criminal then you will get caught, and the number of careful criminals is increasing. Says Cavazos, "a person or entity providing an electronic communication service to the public shall not knowingly divulge to anyone the contents of a communication while in electronic storage on that service." (21) The sysop is not allowed to read your e-mail, destroy your e-mail before you read it, or make your e-mail public unless it has already been made public. "Many systems monitor every keystroke entered by a user. Such keystroke monitoring may very well constitute an interception for the purposes of the ECPA." (18) If the U.S. Government is going to continue to place restrictions on the Internet, then soon we will have to do away with free speech and communications. Says Kirsten Macdissi, "Ultimately, control will probably have to come from the user or a provider close to the individual user..." (1995, p.1). Monitoring individual users is still not the answer; to cut down on fraud and other law violations, a new system must be devised to monitor the Internet that does not violate the right to privacy and does not prevent adults from having a right to free speech. The Constitution reads, "Amendment 1. Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances." (1791). 
If you read the Communications Decency Act of the Telecommunications Reform Act, there are now seven words that cannot be used on the Internet, a direct abridgement of the right to choose what we say. Yet the providers, who have the right to edit what is submitted, choose to let many things slide. The responsibility should lie with the provider, not the U.S. Government. Says Macdissi, "As an access provider, Mr. Dale Botkin can see who is connected, but not what they are doing." (1995). Yet almost all providers keep a running record of what files are communicated through their servers. Macdissi quoted Mr. Dale Botkin, president of Probe Technology, an Internet access provider, as saying, "'There is a grass roots organization called Safe Surf,' Botkin said. 'What they've done is come up with a way for people putting up information on the internet to flag it as okay or not okay for kids.'" (1995). The system is idiot-proof. If the information provider flags his web page as appropriate for children, the Safe Surf program will connect to the site. If the information provider chooses not to flag his web page, or flags it as inappropriate for children, the Safe Surf program will not connect to the site. If this, or something similar, were mandated for the Internet, the Communications Decency Act would be unnecessary. Says Eric Stone, an Internet user, "[The C.D.A.] attempts to place more restrictive constraints on the conversation in cyberspace than currently exist in the Senate Cafeteria..." (1996). The liability is still with the end-user: the American, or foreigner, who sits in front of their computer every day to conduct business, chat with friends, or learn about something they didn't know about before. For us to take liability away from the end-user, we must lay the liability on either the providers or the system operators. Cavazos says, "the Constitution only provides this protection where the government is infringing on your rights." (1994). 
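The Safe Surf scheme described above boils down to a simple decision rule, which can be sketched as follows. The site names and flag labels here are invented for illustration; the real system tagged pages with its own rating codes, but the logic is the same: connect only when the provider has explicitly flagged the page as okay for kids.

```python
# A hypothetical sketch of a flag-based filter like Safe Surf.
# Each provider rates its own site; unflagged sites are treated as blocked.

SITE_FLAGS = {
    'www.example-school.edu': 'ok-for-kids',
    'www.example-casino.com': 'adults-only',
    'www.example-news.com': None,          # provider chose not to flag
}

def safe_surf_allows(site):
    """Connect only when the site is explicitly flagged okay for kids."""
    return SITE_FLAGS.get(site) == 'ok-for-kids'

for site in SITE_FLAGS:
    print(site, '->', 'connect' if safe_surf_allows(site) else 'block')
```

Note the default: a page with no flag at all is blocked, which is what makes the scheme "idiot-proof" in the essay's sense, since forgetting to rate a page fails safe.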
When the providers and system operators censor the users, it is called editorial discretion. When the Government does it, it is an infringement of privacy. So why are we still trying to let the Government into our personal and private lives? The popularity of the C.D.A. with the unknowledgeable and the right-wing conservatives makes it a very popular law. The left and the knowledgeable are in the minority, so our power to change this law is not great. The Government also won't listen to us, because we pose less of a threat in re-election than the majority does. Before the C.D.A. will be changed, the average citizen will have to recognize this law for what it is: a blatant violation of the First Amendment right of free speech. Works Cited Cavazos, E. (1994). Cyberspace and the law: Your rights and duties in the on-line world. Boston: MIT Press. Macdissi, K. (1995). Enforcement is the problem with regulation of the Internet. Midlands Business Journal. Stone, E. (1996). A cyberspace independence declaration. Unpublished essay, Heretic@csulb.com (e-mail address). f:\12000 essays\technology & computers (295)\the computer modem.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ First of all, I would like to start with an introduction: I chose this topic because I thought it would be interesting to learn about how a modem works in a computer. With a modem we are able to access the Internet and BBSs, or Bulletin Board Systems. The modem is one of the smartest computer hardware tools ever created. Modem is an abbreviation of modulator-demodulator, and the idea is fairly simple to explain: through the telephone lines we are able to send messages between a single computer and another computer or a group of computers. The originating computer sends a coded message to the host computer, which decodes it, and there we have the power to access the Internet, talk to other people through terminal programs, and retrieve files from other computers. 
The first patented computer modem was made by Hayes in the early eighties, and from there modems developed rapidly: the first modem speed was 300 baud, then 600, then 1200, and so on. The fastest modem made today is the 56k, which is very fast, though not as fast as ISDN (The Wave, offered through Rogers cable) or as advanced as a satellite modem. Most people now have 14.4 or 28.8 modems (the speed is often quoted in "baud," though strictly speaking these figures are bits per second). The reason for the popularity of the 14.4 and 28.8 is that they are cheap and fairly recent, and haven't gone out of date yet. There are two types of modem, internal and external: an internal modem plugs into a 16-bit slot inside your computer, and an external modem connects through either a serial (mouse) port or a parallel (printer) port. Most people like external modems because they don't take up extra space inside the computer (according to PC Computing). Modem prices range from $100 (28.8bps) to $500 (software-upgradable 56k). Facsimile machines also have a form of modem in them, usually a 2400-baud modem, to decode the message. So imagine a world without the modem for a second: no fax, no Internet, no direct computer communications whatsoever. The three major modem manufacturers are Hayes (maker of the original modem), US Robotics, and Microsoft. In conclusion, life today would be very hard without modems. Some businesses would cease to exist due to poor communication between offices, and without modems we wouldn't have videoconferencing, e-mail, and the other tools we have come to rely on over the past 15 years, not to mention the phone companies' loss from not having to put in all of those extra phone lines that are needed because normal "voice" lines are tied up by modem use. I believe that the modem is a very important and interesting tool of communication, and the Internet is wonderful for knowledge, due to the fact that that is where I got almost all of my information today. Thank you for reading my independent study; I hope you learned something from it. 
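The gap between the speeds quoted in this essay can be made concrete with a little arithmetic. The sketch below treats each quoted rate as bits per second and ignores real-world overhead such as start/stop bits and line noise (an assumption), so the figures are lower bounds on actual transfer times.

```python
# Rough transfer-time arithmetic for the modem speeds mentioned above,
# treating the quoted rates as bits per second and ignoring framing
# overhead and line noise (an assumption for illustration).

def seconds_to_send(size_bytes, bits_per_second):
    """Time to push a file of size_bytes through a link at the given rate."""
    return size_bytes * 8 / bits_per_second

one_mb = 1_000_000  # a 1 MB file, decimal megabyte for simplicity
for rate in (300, 14_400, 28_800, 56_000):
    print(f"{rate:>6} bps: {seconds_to_send(one_mb, rate):8.1f} s")
```

At 300 baud the same megabyte that a 56k modem moves in a couple of minutes takes the better part of a working day, which is why each speed jump felt revolutionary.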
f:\12000 essays\technology & computers (295)\The Computer.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Computers Computers are electronic devices that can receive a set of instructions, or program, and then carry out that program by performing calculations on numerical data or by compiling and correlating other forms of information. The old world of technology could scarcely have conceived of such machines. Different types and sizes of computers find uses throughout our world in the handling of data, from secret governmental files and banking transactions to private household accounts. Computers have opened up a new world in manufacturing through the development of automation, and they have made modern communication systems possible. They are great tools in almost every area of research and applied technology, from constructing models of the universe to producing tomorrow's weather reports, and their use has in itself opened up new areas of development. Database services and computer networks make available a great variety of information sources. The same new designs also raise new questions about privacy and restricted information sources, and computer crime has become a very important risk that society must face if it would enjoy the benefits of modern technology. Two main types of computers are in use today, analog and digital, although the term computer is often used to mean only the digital type. Everything that a digital computer does is based on one operation: the ability to determine if a switch, or gate, is open or closed. That is, the computer can recognize only two states in any of its microscopic circuits: on or off, high voltage or low voltage, or, in the case of numbers, 0 or 1. The speed at which the computer performs this simple act, however, is what makes it a marvel of modern technology. Computer speeds are measured in megahertz, or millions of cycles per second. 
A computer with a "clock speed" of 10 MHz, a fairly representative speed for a microcomputer, is capable of executing 10 million discrete operations each second. Business microcomputers can perform 15 to 40 million operations per second, and supercomputers used in research and defense applications attain speeds of billions of cycles per second. Digital computer speed and calculating power are further enhanced by the amount of data handled during each cycle. If a computer checks only one switch at a time, that switch can represent only two commands or numbers; thus ON would symbolize one operation or number, and OFF would symbolize another. By checking groups of switches linked as a unit, however, the computer increases the number of operations it can recognize at each cycle. The first adding machine, a precursor of the digital computer, was devised in 1642 by the French philosopher Blaise Pascal. This device employed a series of ten-toothed wheels, each tooth representing a digit from 0 to 9. The wheels were connected so that numbers could be added to each other by advancing the wheels by the correct number of teeth. In the 1670s the German philosopher and mathematician Gottfried Wilhelm von Leibniz improved on this machine by devising one that could also multiply. The French inventor Joseph Marie Jacquard, in designing an automatic loom, used thin, perforated wooden boards to control the weaving of complicated designs. Analog computers began to be built at the start of the 20th century. Early models calculated by means of rotating shafts and gears. Numerical approximations of equations too difficult to solve in any other way were evaluated with such machines. During both world wars, mechanical and, later, electrical analog computing systems were used as torpedo course predictors in submarines and as bombsight controllers in aircraft. Another system was designed to predict spring floods in the Mississippi River Basin. 
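Returning to the point above about linking switches into groups: n two-state switches taken together can distinguish 2 to the power n different codes, which is exactly why computers handle data in multi-bit units rather than one switch at a time. A quick sketch:

```python
# Each switch has two states, so a group of n switches linked as a unit
# can represent 2**n distinct codes (operations or numbers).

def distinct_codes(n_switches):
    """Number of different codes a group of n two-state switches can hold."""
    return 2 ** n_switches

for n in (1, 8, 16, 32):
    print(n, "switches ->", distinct_codes(n), "codes")
```

One switch gives only 2 codes, but a byte of eight switches already gives 256, and a 32-switch unit gives over four billion, which is why widening the unit checked per cycle multiplies a computer's power.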
In the 1940s, Howard Aiken, a Harvard University mathematician, created what is usually considered the first digital computer. This machine was constructed from mechanical adding machine parts. The instruction sequence to be used to solve a problem was fed into the machine on a roll of punched paper tape, rather than being stored in the computer. In 1945, however, a computer with program storage was built, based on the concepts of the Hungarian-American mathematician John von Neumann. The instructions were stored within a so-called memory, freeing the computer from the speed limitations of the paper tape reader during execution and permitting problems to be solved without rewiring the computer. The rapidly advancing field of electronics led to construction of the first general-purpose all-electronic computer in 1946 at the University of Pennsylvania by the American engineer John Presper Eckert, Jr., and the American physicist John William Mauchly. (Another American physicist, John Vincent Atanasoff, later successfully claimed that certain basic techniques he had developed were used in this computer.) Called ENIAC, for Electronic Numerical Integrator And Computer, the device contained 18,000 vacuum tubes and had a speed of several hundred multiplications per minute. Its program was wired into the processor and had to be manually altered. The use of the transistor in computers in the late 1950s marked the advent of smaller, faster, and more versatile logical elements than were possible with vacuum-tube machines. Because transistors use much less power and have a much longer life, this development alone was responsible for the improved machines called second-generation computers. Components became smaller, as did intercomponent spacings, and the systems became much less expensive to build. Different types of peripheral devices (disk drives, printers, communications networks, and so on) handle and store data differently from the way the computer handles and stores it. 
Internal operating systems, usually stored in ROM, were developed primarily to coordinate and translate data flows from dissimilar sources, such as disk drives or co-processors (processing chips that perform simultaneous but different operations from the central unit). An operating system is a master control program, permanently stored in memory, that interprets user commands requesting various kinds of services, such as display, print, or copy a data file; list all files in a directory; or execute a particular program. A program is a sequence of instructions that tells the hardware of a computer what operations to perform on data. Programs can be built into the hardware itself, or they may exist independently in a form known as software. In some specialized, or "dedicated," computers the operating instructions are embedded in their circuitry; common examples are the microcomputers found in calculators, wristwatches, automobile engines, and microwave ovens. A general-purpose computer, on the other hand, contains some built-in programs (in ROM) or instructions in a chip, but it depends on external programs to perform useful tasks. Once a computer has been programmed, it can do only as much or as little as the software controlling it at any given moment enables it to do. Software in widespread use includes a wide range of applications programs: instructions to the computer on how to perform various tasks. f:\12000 essays\technology & computers (295)\The Computer2.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ This report is about the impact that the personal computer has made during the past 10 years on the community. 
It is a report that includes detailed information about the personal computer and the way it has worked its way into a lot of people's everyday lives. It includes information about the Internet and how it has shaped people's lives, from just a hobby into an obsession. It includes detailed information about its history, especially the time in which it was first developed. There is information about future possibilities for the computer: how it could build the future or destroy it. There is a description of how it is developed and an in-depth look at how it works. A personal computer is a machine that lets you do just about everything you could think of. You can do some basic word-processing and spreadsheets as well as 'surf the Internet'. You can play the latest computer games by yourself as well as against someone from the other side of the world. It can store databases, which could contain information that is kept by police for easier record-keeping, or you could just use one for your own family history. The basic structure of a computer is a keyboard, a monitor, and a case which holds all the components that make a computer run, like a hard drive, a motherboard, and a video card. There are many other additions you can make to this, such as a modem, a joystick, and a mouse. The personal computer was developed during the year 1945 by the Americans to help them decode enemy secret codes during the Second World War. At this time computers were huge and only used by governments, because they were as big as a room. This was because the main components they used were vacuum valves, which made the computers enormous. They also never had anything to hold any memory, so they couldn't actually be classed as true computers. The introduction of a way to store a file came around in the year 1954. 
The computer did not have a big impact on the community until about the year 1985. Commodore released a range of computers, including the Commodore 64 and also another Commodore computer called the VIC-20, which was released in the year 1982. When Intel saw the Commodore 64's success, it released its brand new 386 processor in the year 1985. Though the 386 was easily the better and faster processor, the Commodore 64 seemed to be the computer getting all the attention because of its lower price, so it appealed to a much wider group of people. The 386 was only in the price range of the mega-rich and agencies. The effect of the Commodore 64 was enormous, because it seemed to turn people away from throwing away their money on arcade games such as Pac-Man and Pong when they could be playing them in the convenience of their own homes and without leaving a dent in the change pocket. This marked the fall of the arcade and the rise of the computer. Arcade companies such as Namco were forced to make computer games from then on if they were to make any money. The most significant event to help the rise of the home computer was the invention of the transistor. Before the transistor, information could only travel through a vacuum valve. Then the transistor came along, and because of its size it reduced the size of a computer enormously. With transistors being smaller, more could fit into a small space, and with more transistors there was more activity within them, which in turn made computers faster. Another worthy event was in the year 1954, when the first writable disk was invented. It was a great achievement, because instead of just being able to work out sums and display them, computers were able to store some of the current information for future reference. This way the computer didn't have to do so much work, and therefore it was even quicker at doing sums and cracking codes. 
f:\12000 essays\technology & computers (295)\The Dependability of the Web.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The Dependability of the Web by Nathan Redman A new age is upon us: the computer age. More specifically, the internet. There's no doubt that the internet has changed how fast we communicate and the way we get information. The popularity of the net is rising at a considerable rate. Traffic, the number of people on the web, is four times what it was a year ago, and every thirty seconds someone new logs on to the net to experience for themselves what everyone is talking about. Even Bill Gates, founder of Microsoft, a company that really looks into the future in order to stay ahead of the competition, said the internet is one part of the computer business that he really underestimated. As one of the richest men in the world, I doubt he makes too many mistakes. Nobody could have predicted that the internet would expand at this rate, and that's probably the reason why troubles are arising about the dependability of the web. As usage soars (some estimate there will be over eighty million users by the end of 1997), could the internet overload? Even though no one predicted the popularity of the net, some are quick to foresee the downfall or doomsday of this fad. If you can call it a fad. The demand continues to rise and is now so great that technological improvements are continually needed to handle the burden that's been created by all of the people using the net. There are many things that can lighten the load that's been slowing down the internet. First, it needs much better organization, because with over seventy-five million web pages, and the number rising fast, being able to pinpoint the information you are trying to find is very difficult. It's like finding a needle in a haystack. 
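The needle-in-a-haystack problem above invites a narrowing approach: keep a broad topic, then repeatedly cut the result list down by sub-topic. Here is a minimal sketch of that kind of hierarchical filtering; the site list and topic tags are invented for illustration, not drawn from any real search engine.

```python
# A minimal sketch of hierarchical search narrowing: start from a broad
# topic and repeatedly narrow by sub-topic until the result list is
# manageable. The sites and their topic tags are hypothetical.

def filter_down(sites, topic_path):
    """Keep only sites tagged with every topic in topic_path, applied in order."""
    results = sites
    for topic in topic_path:
        results = [s for s in results if topic in s["topics"]]
    return results

sites = [
    {"url": "weather-maps.example", "topics": {"science", "weather", "maps"}},
    {"url": "storm-chat.example",   "topics": {"weather", "chat"}},
    {"url": "recipes.example",      "topics": {"food"}},
]

hits = filter_down(sites, ["science", "weather"])
```

Each extra sub-topic can only shrink the list, so a few well-chosen steps take a fifty-thousand-site result down to something a person can actually read.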
When you use a search engine to find information on a certain topic, the search can come back with thousands upon thousands of pages or sites, sometimes over fifty thousand. Now you have the never-ending task of looking through each site to find what you want. Plus, with links on most pages, people can end up getting lost fast and not being able to find their way back. Search engines should develop what I call the filter-down effect. In the filter-down effect, a broad topic is chosen and then sub-topics are chosen continuously until the list of sites has narrowed the search to respectable sites and the information needed. Having better organization would keep some of the worthless sites around cyberspace from ending up first in the search engine's results. A second way to lighten the load of the internet is by improving the time it takes to load a page. The speed of loading a page depends greatly on the type of equipment, the software, how much memory you have, your connection speed, plus a lot of other factors, but web page designers should use more text and less graphics, animation, and video, because text takes a lot less time to load. According to an October 1996 survey by AT&T, sixty-seven percent of sites were "off the net" for at least one hour per day because of overload. The web wasn't designed to handle graphics, animation, audio, and video. It was first developed for e-mail (electronic mail) and transferring text files, but web page designers want their pages to look their best so that people, or in the case of business, potential customers, visit their site, are impressed by it, come back in the future, and tell others about the site. After all, the best way to have people visit your web site is by word of mouth, because it is very hard for people to find a site unless they happen to know the exact address. Sometimes though, popularity can kill a site. 
For instance, when the weather got bad in Minnesota one weekend, the National Weather Service web site got overloaded with people wanting to read and see the current weather forecasts. The result was the server going down from the overload, leaving nobody the ability to visit that site. With more businesses seeing the dollar signs that the web could produce, they compete for advertising on the web, because it is much cheaper than advertising on television or in the newspaper, and it can reach people all the way around the world. Designers of these pages can't forget that surfers aren't willing to wait very long for pages to load, so simplifying pages can make the web a lot faster. Another way to make things faster is to make sure that servers can handle the load applied to them. Internet providers want to make money just like all other businesses, so they try fitting as many customers onto one server as they can. Putting fewer people on each server would create faster service. Also, popular businesses or sites should have big enough capacities to handle the number of people that visit. Slow servers will lose a lot of business. Internet providers and businesses should look at future capacities and not just at current loads. As with the doomsday scenario, more and more fears about logging into cyberspace are beginning to receive attention. As mentioned above, speed is a major concern. Beyond what is recommended, technological improvements need to be developed. For example, bigger pipelines (lines carrying computer data), like fiber optics and satellite transmission, are receiving high ratings from people, but like all good things in life, putting bigger pipelines in the ground takes a lot of time and money. If the government or private industries are willing to lay the foundation for putting in faster lines, it will change the world just like the railroad tracks did in the 1800's. Another major fear of people on the superhighway of information is security. 
Hackers (people that get into data on computers they aren't supposed to) can hack into a lot of private information on the information superhighway. In reality it isn't any different than credit cards and valuables being stolen in the real world. There are currently cybercops surfing the web looking for illegal happenings on the information superhighway. Patrolling the web is only one way to help put a stop to hackers. Encryption and better security software need to be developed, along with other computer technology, to help control hackers or cybercriminals. There's no denying the fact that the internet is very powerful in today's world. It combines text, audio, animation, video, and graphics all in one, and at the click of a button you can receive entertainment and news, communicate with people around the world, do your banking, reserve tickets, buy, sell, and trade merchandise or collectibles, or even order dinner for the night. These are just a few things that can be accomplished on the net. People aren't only attracted to what the internet has to offer now but to what will be available in the near future. Some day computers will replace our television, radio, answering machine, and telephone. Technology is developing so rapidly that things not even imaginable will be developed to make our lives easier but more confusing. After all, no one predicted where the net is today and how fast it would develop. f:\12000 essays\technology & computers (295)\The Enviroment is going to hell.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ This is the litany: Our resources are running out. The air is bad, the water worse. The planet's species are dying off, or more exactly, we're killing them, at the staggering rate of 100,000 per year, a figure that works out to almost 2000 species per week, 300 per day, 10 per hour, another dead species every 6 minutes. 
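The breakdown of the 100,000-per-year figure just quoted is simple division, and it is worth checking. The sketch below does the arithmetic; the per-week and per-day figures come out close to the quoted round numbers, while the per-hour figure works out nearer 11 than 10, and the interval nearer five and a quarter minutes than six.

```python
# Checking the litany's arithmetic: 100,000 species per year broken down
# into per-week, per-day, per-hour, and minutes-per-species figures.

per_year = 100_000
per_week = per_year / 52
per_day = per_year / 365
per_hour = per_day / 24
minutes_each = 365 * 24 * 60 / per_year  # minutes in a year per species

print(f"per week: {per_week:.0f}")        # the text rounds this to 2000
print(f"per day:  {per_day:.0f}")         # the text rounds this to 300
print(f"per hour: {per_hour:.1f}")        # the text rounds this to 10
print(f"one every {minutes_each:.2f} minutes")
```

Whatever one thinks of the underlying 100,000 estimate, the quoted cascade of figures is internally consistent once rounding is allowed for.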
We're trashing the planet, washing away the topsoil, paving over our farmlands, systematically deforesting our wildernesses, decimating the biota, and ultimately killing ourselves. The world is getting progressively poorer, and it's all because of population, or more precisely, over-population. There's a finite store of resources on our pale blue dot, spaceship Earth, our small and fragile tiny planet, and we're fast approaching its ultimate carrying capacity. The limits to growth are finally upon us, and we're living on borrowed time. The laws of population growth are inexorable. Unless we act decisively, the final result is written in stone: mass poverty, famine, starvation, and death. Time is short, and we have to act now. That's the standard and canonical litany. It's been drilled into our heads so often and so forcefully that to hear it yet once more is ... well, it's almost reassuring. It's comforting, oddly consoling: at least we're face to face with the enemies: consumption, population, mindless growth. And we know the solution: cut back, contract, make do with less. "Live simply so that others may simply live." There's just one problem with The Litany, just one slight little wee imperfection: every item in that dim and dreary recitation, each and every last claim, is false. Incorrect. At variance with the truth. Not the way it is, folks. f:\12000 essays\technology & computers (295)\The evelution of the microprossesor.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The evolution of the microprocessor. The microprocessor has changed a lot over the years, says Michael W. Davidson (http://micro.magnet.fsu.edu/chipshot.html). Microprocessor technology is progressing so rapidly that even experts in the field are having trouble keeping up with current advances. As more competition develops in this $150 billion a year business, the power and speed of the microprocessor are expanding at an almost explosive rate. 
The changes have been most evident over the last decade. The microprocessor has changed the way computers work by making them faster. The microprocessor is often called the brain of the computer; it serves as the C.P.U. (central processing unit), and without the microprocessor the computer is more or less useless. Motorola and Intel have invented most of the microprocessors of the last decade. Over the years there has been a constant battle over cutting-edge technology. In the 80's Motorola won the battle, but now in the 90's it looks as if Intel has won the war. The 68000 was the original microprocessor of Motorola's line (Encarta 95). It was invented by Motorola in the early 80's. The 68000 had two very distinctive qualities: 24-bit physical addressing and a 16-bit data bus. The original Apple Macintosh, released in 1984, had an 8-MHz 68000 at its core. It was also found in the Macintosh Plus, the original Macintosh SE, the Apple LaserWriter IISC, and Hewlett-Packard's LaserJet printer family. The 68000 was very efficient for its time; for example, it could address 16 megabytes of memory, which is 16 times the memory of the Intel 8088 found in the IBM PC. Also, the 68000 had a linear addressing architecture, which was better than the 8088's segmented memory architecture because it made writing large applications more straightforward. The 68020 was invented by Motorola in the mid-80's (Encarta 95). The 68020 is about two times as powerful as the 68000. The 68020 has 32-bit addressing and a 32-bit data bus and is available in various speeds: 16 MHz, 20 MHz, 25 MHz, and 33 MHz. The microprocessor 68020 is found in the original Macintosh II and in the LaserWriter IINT, both of which are from Apple. The 68030 microprocessor was invented by Motorola about a year after the 68020 was released (Encarta 95). 
The 68030 has 32-bit addressing and a 32-bit data bus just like its predecessor, but it has paged memory management built into it, delaying the need for additional chips to provide that function. A 16-MHz version was used in the Macintosh IIx, IIcx, and SE/30. A 25-MHz model was used in the Mac IIci and the NeXT computer. The 68030 is produced in various versions: 20-MHz, 33-MHz, 40-MHz, and 50-MHz. The microprocessor 68040 was invented by Motorola (Encarta 95). The 68040 has 32-bit addressing and a 32-bit data bus just like the previous two microprocessors. But unlike the two previous microprocessors, this one runs at 25 MHz and includes a built-in floating-point unit and memory management units, which include 4-KB instruction and data caches. This eliminates the need for additional chips to provide these functions. Also, the 68040 is capable of parallel instruction execution by means of multiple independent instruction pipelines, multiple internal buses, and separate caches for both data and instructions. The microprocessor 68881 was invented by Motorola for use with both the microprocessor 68000 and the 68020 (Encarta 95). Math coprocessors, if supported by the application software, speed up any function that is math-based. The microprocessor 68881 does this by providing an additional set of instructions for high-performance floating-point arithmetic, a set of floating-point data registers, and 22 built-in constants, including π and powers of 10. The microprocessor 68881 conforms to the ANSI/IEEE 754-1985 standard for binary floating-point arithmetic. When making the Macintosh II, Apple noticed that when they added a 68881, the improvement in the performance of the interface, and thus the apparent performance of the machine, was dramatic. Apple then decided to add it as standard equipment. The microprocessor 80286, also called the 286, was invented by Intel in 1982 (Encarta 95). The 286 was included in the IBM PC/AT and compatible computers in 1984. 
The 286 has 16-bit registers, transfers information over the data bus 16 bits at a time, and uses 24 bits to address memory locations. The 286 is able to operate in two modes: real (which is compatible with MS-DOS and behaves like the 8086 and 8088 chips) and protected (which increases the microprocessor's functionality). Real mode limits the amount of memory the microprocessor can address to one megabyte; in protected mode, however, the addressing range is increased, and the chip is capable of accessing up to 16 megabytes of memory directly. Also, a 286 microprocessor in protected mode protects the operating system from misbehaved applications that could normally halt (or "crash") a system with a non-protected microprocessor such as the 80286 in real mode or the plain old 8088. The microprocessor 80386DX, also called the 386 or the 386DX, was invented in 1985 (Encarta 95). The 386 was used in IBM and compatible microcomputers such as the PS/2 Model 80. The 386 is a full 32-bit microprocessor, meaning that it has 32-bit registers, it can transfer information over its data bus 32 bits at a time, and it can use 32 bits in addressing memory. Like the earlier 80286, the 386 operates in two modes, again real (which is compatible with MS-DOS and behaves like the 8086 and 8088 chips) and protected (which increases the microprocessor's functionality and protects the operating system from halting because of an inadvertent application error). Real mode limits the amount of memory the microprocessor can address to one megabyte; in protected mode, however, the total amount of memory that the 386 can address directly is 4 gigabytes, roughly 4 billion bytes. The 80386DX also has a virtual mode, which allows the operating system to effectively divide the 80386DX into several 8086 microprocessors, each having its own 1-megabyte space, allowing each "8086" to run its own program. 
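The memory limits quoted throughout this essay all follow from one rule: a chip with n address bits can distinguish 2 to the power n byte locations. A quick sketch ties the quoted figures together:

```python
# Addressable memory grows as 2**(address bits): 20 bits give the 1 MB
# real-mode limit, the 68000's and 286's 24 bits give 16 MB, and the
# 386's full 32 bits give 4 GB, matching the figures quoted in the text.

def addressable_bytes(address_bits):
    """Bytes reachable with the given number of address lines."""
    return 2 ** address_bits

MB = 2 ** 20  # one binary megabyte
print(addressable_bytes(20) // MB, "MB")  # real mode limit
print(addressable_bytes(24) // MB, "MB")  # 68000 / 286 protected mode
print(addressable_bytes(32) // MB, "MB")  # 386 protected mode: 4096 MB = 4 GB
```

Doubling the address width does not double the reachable memory; each extra address line doubles it, which is why the jump from 24 to 32 bits took the ceiling from 16 MB to 4 GB.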
The microprocessor 80386SX, also called the 386SX, was invented by Intel in 1988 as a low-cost alternative to the 80386DX (Encarta 95). The 80386SX is in essence an 80386DX processor limited by a 16-bit data bus. The 16-bit design allows 80386SX systems to be configured from less expensive AT-class parts, ensuring a much lower complete system price. The 80386SX offers enhanced performance over the 80286 and access to software designed for the 80386DX. The 80386SX also offers 80386DX comforts such as multitasking and virtual 8086 mode. The microprocessor 80387SX, also called the 387SX, was invented by Intel (Encarta 95). It is a math, or floating-point, coprocessor for use with the 80386SX family of microprocessors. The 387SX is available in a 16-MHz version only. If supported by the application software, the 80387SX can dramatically improve system performance by offering arithmetic, trigonometric, exponential, and logarithmic instructions for the application to use: instructions not offered in the 80386SX instruction set. The 80387SX also offers built-in operations for sine, cosine, tangent, arctangent, and logarithm calculations. If used, these additional instructions are carried out by the 80387SX, freeing the 80386SX to perform other tasks. The 80387SX is capable of working with 32- and 64-bit integers; 32-, 64-, and 80-bit floating-point numbers; and 18-digit BCD (binary coded decimal) operands; it conforms to the ANSI/IEEE 754-1985 standard for binary floating-point arithmetic. The 80387SX operates independently of the 80386SX's mode, and it performs as expected regardless of whether the 80386SX is running in real, protected, or virtual 8086 mode. The microprocessor i486, also called the 80486 or the 486, was invented in 1989 by Intel (Encarta 95). Like its 80386 predecessor, the 486 is a full 32-bit processor with 32-bit registers, a 32-bit data bus, and 32-bit addressing. 
It includes several enhancements, however, including a built-in cache controller, the built-in equivalent of an 80387 floating-point coprocessor, and provisions for multiprocessing. In addition, the 486 uses a "pipeline" execution scheme that breaks instructions into multiple stages, resulting in much higher performance for many common data and integer math operations. In conclusion, it is evident that microprocessors are developing by leaps and bounds, and it would not be surprising if, by the time this paper hits the teacher's desk or by the time you read it, the next superchip has already been developed (Encarta 95). f:\12000 essays\technology & computers (295)\THe Evolution of the PC and Microsoft.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Kasey Anderson 2/21/97 Computer Tech. ESSAY The Evolution of the PC Xerox, Apple, IBM, and Compaq all played major roles in the development of the Personal Computer, or "PC," and the success of Microsoft. Though it may seem so, the computer industry did not just pop up overnight. It took many years of dedication, hard work, and most importantly, thievery to turn the personal computer from a machine the size of a Buick, used only by zit-faced "nerds," to the very machine I am typing this report on. Xerox started everything off by creating the first personal computer, the ALTO, in 1973. However, Xerox did not release the computer because they did not think that was the direction the industry was going. This was the first of many mistakes Xerox would make in the next two decades. So, in 1975, Ed Roberts built the Altair 8800, which is largely regarded as the first PC. However, the Altair served no real purpose. This left computer-lovers still yearning for the "perfect" PC . . . actually, it didn't have to be perfect; most "nerds" just wanted their computer to do SOMETHING. The burning need for a PC was met in 1977, when Apple, a company formed by Steve Jobs and Steve Wozniak, released its Apple II. 
Now the nerds were satisfied, but that wasn't enough. In order to catapult the PC into a big-time product, Apple needed to make it marketable to the average Joe. This was made possible by VisiCalc, the home spreadsheet. The Apple II was now a true-blue product. In order to compete with Apple's success, IBM needed something to set its product apart from the others. So they developed a process called "open architecture." Open architecture meant buying all the components separately, piecing them together, and then slapping the IBM name on it. It was quite effective. Now all IBM needed was software. Enter Bill Gates. Gates, along with buddy Paul Allen, had started a software company called Microsoft. Gates was one of two major contenders for IBM. The other was a man named Gary Kildall. IBM came to Kildall first, but he turned them away (he has yet to stop kicking himself), and so they turned to Big Bad Bill Gates and Microsoft. Microsoft would continue supplying IBM with software until IBM insisted Microsoft develop Q/DOS, which was compatible only with IBM equipment. Microsoft was also engineering Windows, its own separate software, but IBM wanted Q/DOS. By this time, PC clones were popping up all over. The most effective clone was the Compaq. Compaq reverse-engineered the IBM BIOS (Basic Input-Output System) chip, producing the first truly compatible clone. They spearheaded a clone market that not only used DOS, but later Windows as well, beginning the incredible success of Microsoft. With all of these clones, Apple was in dire need of something new and spectacular. So when Steve Jobs got invited to Xerox to check out some new systems (big mistake), he began drooling profusely. There he saw the GUI (graphical user interface) and immediately fell in love. So, naturally, Xerox invited him back a second time (BBBBIIIIGGGG mistake) and he was allowed to bring his team of engineers. Apple did the obvious and stole the GUI from Xerox. 
After his own computer, the LISA, flopped, Jobs latched on to the project of one of his engineers. In 1984, the Apple Macintosh was born. Jobs, not wanting to burden his employees with accolades, accepted all of the credit. Even with the coveted GUI, Apple still needed a good application. And who do you call when you need software? Big Bad Bill Gates. Microsoft designed "desktop publishing" for Apple. However, at the same time, Gates was peeking over Jobs's shoulder to get some "hints" to help along with the Windows production. About the same time, IBM had Microsoft design OS/2 for them so they could close the market for clones by closing their architecture. This was the last straw for Microsoft. They designed OS/2 and then split with IBM to concentrate fully on Windows. The first few versions of Windows were only mediocre, but Windows 3.0 was the answer to what everyone wanted. However, it did not have its own operating system, something that Windows '95 does. 3.0 sold 30 million copies in its first year, propelling Microsoft to success. So, neither the PC industry nor Microsoft was built overnight. Each owes a lot to several different people and companies. Isn't it amazing that so much has developed in just twenty-three years? Here's something even more amazing. Remember the ALTO? Guess what it had . . . a GUI, a mouse, a networking system, everything. So maybe we haven't come all that far. f:\12000 essays\technology & computers (295)\The Future of Computer Crime in America .TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Sociology Research Paper Sociology per. 
#2 10/8/96 The Future of Computer Crime in America Sociology Topics: Future Society, Social Change, Social and Environmental Issues, Deviant Behavior, Crime/Corrections Name: Brandon Robinson Period: #2 The proliferation of home computers, and of home computers equipped with modems, has brought about a major transformation in the way American society communicates, interacts, and receives information. These changes have been popularized by the media and by the greatly increased personal and private-sector use of the Internet. These factors, plus the fact that more and more business and government institutions are jumping to make use of these services, have put a much wider range of information at the fingertips of those often select and few individuals who know how to access, understand, and use these information sources. Oftentimes this information is of a very sensitive and private nature, on anything from IRS tax returns to top-secret NASA payload launch information. Piled on top of that, many times the individuals accessing these information sources are doing so by illegal means and are often driven by deviant and illegal motives. It is said that at any given time the average American has his name in an active file in over 550 computer information databases, of which nearly 90% are online; and that figure comes nowhere close to counting how many times your personal information is listed in some database in an inactive file. The "average American" could simply sit in his or her home doing nearly nothing all day long and still have his or her name go through over 1,000 computers a day. All of these vast information files hold the crucial ones and zeros of data that make up your life as you and all others know it. All of these data bits sit in the hands of hundreds of thousands of people, with little or NO central control or regulatory agency to oversee the safe handling of your precious little ones and zeros of information. 
As it would seem, George Orwell was a little early with his title of "1984." BIG BROTHER is INDEED WATCHING US ALL, and as it would seem, our BIG BROTHER is a lot bigger than Mr. Orwell could have ever imagined. And our BIG BROTHER is EVERYWHERE! The hundreds of thousands of people who do have this information make up our modern BIG BROTHER, from government institutions to private advertising companies. These people are all the "trusted" ones who use our information every day for legal and useful purposes, but what about the others who use their skills and knowledge to gain their "own" personal and illegal access to these vast depositories of information? These individuals, popularized and demonized by the media, are often referred to as "Hackers": "one who obtains unauthorized, if not illegal, access to computer data systems and or networks," or, by the media's definition, "maladjusted losers forming high-tech street gangs that are dangerous to society" (Chicago Tribune, 1989). Whichever definition fits best, they are indeed becoming a very serious issue and worry to some in our ever and constantly changing American techno-society. Because of the serious dereliction of our elected representatives, who have valiantly once again failed to keep up with the ever-changing times, there are few if any major, clear, and easy-to-understand CONSTITUTIONAL laws (witness the recent 3-to-1 overturn of the not only controversial but deemed unconstitutional law called the Communications Decency Act) governing the vastly wild and uncharted realms of cyberspace. The flagrant and serious, if not slightly laughable, attempts of our technologically illiterate and ignorant masses of elected officials send a clear S.O.S. message to the future generations of America: not only LOCK your PHYSICAL DOORS but also LOCK and double-LOCK all of your COMPUTER DOORS as well, if this society is to evolve efficiently with its ever-changing rate of technology. 
We as the masses are going to have to keep abreast of the current events lurking out in the depths of cyberspace, before we, as a result of our inability to adapt and our arrogance and ignorance, all become products of our own technological overindulgence. To avoid the tragic collision of our own self-manufactured technological self-destruction and the breakdown of our society, in every tangible aspect, as we know it today, I believe that in the future we are headed towards, you will see our society divided into two major parts: 1) those who possess the knowledge, and the capability to obtain the knowledge/information, i.e., the "LITERATE," and 2) those who do not possess the skills necessary to obtain that crucial knowledge/information, i.e., the "ROAD KILL." Because in the future, the power structure will not be decided by who has the most guns or missiles or weapons; the power structure will be made up of little tiny ones and zeros, bits of data given to whoever possesses the power of the knowledge and the power to manipulate who has that knowledge. The "rich" and "elitist" will be the knowledge possessors and givers, and the "poor" will be those who lack knowledge. Knowledge will bring power and wealth, and the lack of it will bring . . . well, the lack of power, wealth, and knowledge. Brandon Robinson 10/8/96 f:\12000 essays\technology & computers (295)\The future of the internet.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The Future Of The Internet In today's world of computers, the internet has become part of one's regular vocabulary. 
The internet is everywhere: in the news, the newspaper, and magazines, and entire books are written on it regularly. Its growth rate is incredible, increasing by about 10% every month (Dunkin 180). This rapid growth rate could either help the system or destroy it. The possibilities are endless on what can be done on the internet. People can tap into libraries, tap into weather satellites, download computer programs, talk to other people with related interests, and send electronic mail all across the world (Elmer-Dewitt 62). It is used by thousands of different kinds of people and organizations, like the military, businesses, colleges and universities, and common people with no specific purpose for using it (Dunkin 180). Phillip Elmer-Dewitt stated it perfectly: "It is a place for everyone." The rapid growth of the internet has many positive aspects to it. The new technology that is developing with this rapid growth will help keep computers up to date with what is being developed on the internet. With these technological advances, systems will be faster, more powerful, and capable of doing more complicated tasks. As more people with different interests, thoughts, and ideas get involved with the internet, there will be more information available (Elmer-Dewitt 64). As the number of internet users increases, the prices of internet software and services will gradually decrease (Peterson 358). The best quality about the size of the internet is that it is so big it cannot be destroyed (Elmer-Dewitt 62). There are many problems with the constant growth of the internet. Its largest weakness is that it is not owned or controlled by anyone (Elmer-Dewitt 63). There is no base plan for the future of the internet (Dunkin 180). As it grows in size, there is less control of the system. Many groups are fighting for censorship, but that is impossible with the size of the internet (Peterson 358). 
With more sites and pages being added to the internet, information is becoming harder to find, and it is getting more difficult to find your way around. There are also problems just like on "any heavily traveled highway, including vandalism, break-ins, and traffic jams. It's like an amusement park that is so successful that there are long waits for the most popular rides" (Elmer-Dewitt 63). Right now, no one knows what direction the future of the internet will take. The future of the internet will be determined by whether the growth is just a trend or whether it will keep growing and technology will keep up with it. Works Cited Dunkin, Amy. "Ready To Cruise The Internet." Business Week 28 Mar. 1994: 180-181. Elmer-Dewitt, Philip. "First Nation In Cyberspace." Time 6 Dec. 1993: 62-64. Peterson, I. "Guiding The Growth Of The Info Highway." Science News 4 Jun. 1994: 357-358. f:\12000 essays\technology & computers (295)\The History of Computers and the Internet.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The History of the Internet and the WWW 1. The History of the World Wide Web- The internet started out as an information resource for the government so that they could talk to each other. They called it "The Indestructible Network" because it was so many computers linked together that if one server went down, no one would know. This report will mainly focus on the history of the World Wide Web (WWW) because it is the fastest growing resource on the internet. The internet consists of different protocols such as WWW, Gopher (like the WWW but text-based), FTP (File Transfer Protocol), and Telnet (allows you to connect to different BBSs). There are many smaller ones as well, too numerous to list. BBS is an abbreviation for Bulletin Board System. A BBS is a computer that you can either dial into or access from the Internet. BBSs are normally text based. 2. 
The Creator of the WWW- A graduate of Oxford University, England, Tim is now with the Laboratory for Computer Science (LCS) at the Massachusetts Institute of Technology (MIT). He directs the W3 Consortium, an open forum of companies and organizations with the mission to realize the full potential of the Web. With a background of system design in real-time communications and text processing software development, in 1989 he invented the World Wide Web, an internet-based hypermedia initiative for global information sharing, while working at CERN, the European Particle Physics Laboratory. He spent two years with Plessey Telecommunications Ltd., a major UK telecom equipment manufacturer, working on distributed transaction systems, message relays, and bar code technology. In 1978 Tim left Plessey to join D.G. Nash Ltd., where he wrote, among other things, typesetting software for intelligent printers, a multitasking operating system, and a generic macro expander. A year and a half spent as an independent consultant included a six-month stint as consultant software engineer at CERN in Geneva, Switzerland. Whilst there, he wrote for his own private use his first program for storing information, including random associations. Named "Enquire," and never published, this program formed the conceptual basis for the future development of the World Wide Web. I could go on and on forever telling you about this person, but my report is not about him. From 1981 until 1984, Tim was a founding Director of Image Computer Systems Ltd., with technical design responsibility. In 1984, he took up a fellowship at CERN, to work on distributed real-time systems for scientific data acquisition and system control. In 1989, he proposed a global hypertext project, to be known as the World Wide Web. Based on the earlier "Enquire" work, it was designed to allow people to work together by combining their knowledge in a web of hypertext documents. 
He wrote the first World Wide Web server and the first client, a WYSIWYG hypertext browser/editor which ran in the NeXTStep environment. This work was started in October 1990, and the program "WorldWideWeb" was first made available within CERN in December, and on the Internet at large in the summer of 1991. Through 1991 and 1993, Tim continued working on the design of the Web, coordinating feedback from users across the Internet. His initial specifications of URIs, HTTP, and HTML were refined and discussed in larger circles as the Web technology spread. In 1994, Tim joined the Laboratory for Computer Science (LCS) at the Massachusetts Institute of Technology (MIT) as Director of the W3 Consortium, which coordinates W3 development worldwide, with teams at MIT and at INRIA in France. The consortium takes as its goal to realize the full potential of the web, ensuring its stability through rapid evolution and revolutionary transformations of its usage. In 1995, Tim Berners-Lee received the Kilby Foundation's "Young Innovator of the Year" Award for his invention of the World Wide Web, and was co-recipient of the ACM Software Systems Award. He has been named as the recipient of the 1996 ACM Kobayashi award, and co-recipient of the 1996 Computers and Communication (C&C) award. He has honorary degrees from the Parsons School of Design, New York (D.F.A., 1996) and Southampton University (D.Sc., 1996), and is a Distinguished Fellow of the British Computer Society. This has just been about Tim, but here is the real history of the WWW. 3. History of the WWW dates - "Information Management: A Proposal" written by Tim BL and circulated for comments at CERN (TBL). Paper "HyperText and CERN" produced as background (text or WriteNow format). Project proposal reformulated with encouragement from CN and ECP divisional management. Robert Cailliau (ECP) is co-author. The name World-Wide Web was decided on because the name tells you what the resource does. 
HyperText is the language that users who want home pages on the internet use to write them. (See a sample of this on last page). In November of 1990 the initial WorldWideWeb program was developed on the NeXT (TBL). This was a WYSIWYG browser/editor with direct inline creation of links. This made the WWW easier to use and navigate without having to type long numbers. Technical student Nicola Pellow (CN) joins and starts work on the line-mode browser. Bernd Pollermann (CN) helps get the interface to the CERNVM "FIND" index running. TBL gives a colloquium on hypertext in general. When this happened the WWW really started sprouting, because these new browsers made the WWW easier to navigate. 4. History of the World Wide Web dates 1991-1993 In 1991 a line-mode browser (www) was released to a limited audience on "priam" vax, rs6000, and sun4. On the 17th of May a general release of WWW software was made available on CERN servers. This allowed people to start their own internet providers, such as America Online and South Carolina SuperNet. On the 12th of June a seminar was held for the WWW that allowed people to come in and see this new software in progress. I would like to skip ahead to the present day because more interesting things are happening now. 5. Present Day World Wide Web and Internet resources- The World Wide Web today is the most popular resource on the internet. Facts show that the internet has an average of 45 million users a day, with one more joining every eight seconds. The internet transmits at a maximum speed of 100 Mb per second. The present-day internet is fast and reliable; it is also very popular. The internet started out as just a few computers linked together, and now look what we have. The internet will live on forever, and so will the WWW, though I believe the WWW will be replaced by something new in the next 10 years. 
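Underneath the browsers described above, the Web's plumbing is remarkably simple: a client opens a connection to a server and sends a short plain-text HTTP request. As an illustration only (Python stands in for an early client; nothing is actually sent, and the host and path merely echo CERN's historic first server), the request such a client would build looks like this:

```python
# Build the plain-text request an early HTTP client would send for a page.
# The host and path are illustrative; no network connection is opened here.
def build_get_request(host, path="/"):
    return (f"GET {path} HTTP/1.0\r\n"   # method, resource, protocol version
            f"Host: {host}\r\n"          # which server the request is for
            "\r\n"                       # blank line ends the headers
            ).encode("ascii")

request = build_get_request("info.cern.ch", "/hypertext/WWW/TheProject.html")
print(request.decode("ascii"))
```

The server replies with a status line, its own headers, and then the HTML of the page, which the browser renders with the links clickable.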
f:\12000 essays\technology & computers (295)\The History of Computers.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The History of Computers A computer is a machine built to do routine calculations with speed, reliability, and ease, greatly simplifying processes that without it would be much longer and more drawn out. Since their introduction in the 1940's, computers have become an important part of the world. Besides the systems found in offices and homes, microcomputers are now used in everyday locations such as automobiles, aircraft, telephones, and kitchen appliances. Computers are used for education as well, as stated by Rourke Guides in his book, Computers: computers are used in schools for scoring examination papers, and grades are sometimes recorded and kept on computers (Guides 7). "The original idea of a computer came from Blaise Pascal, who invented the first digital calculating machine in 1642. It performed only additions of numbers entered by dials and was intended to help Pascal's father, who was a tax collector" (Buchsbaum 13). However, in 1671, Gottfried Wilhelm von Leibniz invented a computer that could not only add but multiply. Multiplication was quite a step to be taken by a computer because until then, the only thing a computer could do was add. The computer multiplied by successive adding and shifting (Guides 45). Perhaps the first actual computer was made by Charles Babbage. He explains himself rather well with the following quote: "One evening I was sitting in the rooms of the Analytical Society at Cambridge with a table full of logarithms lying open before me. Another member coming into the room, and seeing me half asleep called out, 'Well Babbage, what are you dreaming about?', to which I replied, 'I am thinking that all these tables might be calculated by machinery'" (Evans 41). "The first general purpose computer was invented in 1871 by Charles Babbage, just before he died" (Evans 41). 
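Leibniz's "successive adding and shifting" mentioned above is the same long-multiplication idea binary hardware still uses. A small sketch (not from the essay; the function name is my own) showing how repeated addition plus place-value shifts yields a product:

```python
# Multiply the way Leibniz's machine did: for each digit of the
# multiplier, repeatedly add a shifted copy of the multiplicand.
def shift_and_add_multiply(a, b):
    product = 0
    shift = 0                       # current place value (power of ten)
    while b > 0:
        digit = b % 10              # lowest remaining digit of the multiplier
        for _ in range(digit):      # "successive adding"
            product += a * 10 ** shift
        b //= 10                    # move on to the next digit
        shift += 1                  # "shifting" to the next place value
    return product

print(shift_and_add_multiply(127, 46))  # same result as 127 * 46
```

No multiplication instruction is ever needed: the machine only has to add and to shift a number one place to the left, which is exactly what Leibniz's stepped drum mechanized.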
Babbage's machine was still a prototype, of course, but it was a beginning. Around this time, there was little or no interest in the development of computers. People feared, due to their lack of knowledge, that computers would take over everything and run their lives (Buchsbaum 9). If only these 19th-century skeptics, who were ignorant of the necessity of computers, had known the many benefits they were missing out on, they would have more readily funded individuals such as Charles Babbage. As Glossbrenner states in The Complete Handbook of Personal Computers, computers are great information resources and great conversationalists: a computer's keyboard is its mouth, the processor its brain, and the monitor its eyes, and just like a person it can communicate with you (Glossbrenner 18). People did not comprehend this early on and didn't take computers as seriously as they should have. In conclusion, throughout the years, people should have been more interested and involved in computers. Today, nearly everything is centered around them, with their high-speed capabilities getting even better every day. They will continue to grow and become more advanced forever. f:\12000 essays\technology & computers (295)\The History of Intel Corporation.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ History of The Intel Corporation The Intel Corporation is the largest manufacturer of computer devices in the world. In this research paper I will discuss where, when, and how Intel was founded, the immediate effects that Intel made on the market, their marketing strategies, their competition, and finally, what Intel plans to do in the future. Intel didn't just start out of thin air; it was created after Bob Noyce and Gordon Moore first founded Fairchild Semiconductor with six other colleagues. 
Fairchild Semiconductor was going pretty well for about ten years when Bob and Gordon decided to resign because they were tired of not being able to do things the way they wanted to; they proceeded to establish a new integrated circuit electronics company. Gordon suggested that semiconductor memory looked promising enough to risk starting a new company. Intel was born. Intel made quite an impact on the industry soon after it was founded. Sales revenues jumped enormously through Intel's international expansion to many countries, including in Europe and the Philippines, in the early 70's. From 1969 to 1970 Intel's revenues went up by almost four million dollars! Today, Intel is one of the biggest companies, pulling in billions and billions of dollars each year. Intel has had many factors over the years that have allowed it to monopolize the computer industry, resulting in little competition. First of all, Intel is almost 25 years ahead of its competitors. Therefore, most competing companies are just starting out and have little or no effect on Intel's sales. Another reason is obviously Intel's reputation. They have built up such a standard of excellence that when someone hears the word Intel, they think high quality. Intel's popularity, reputation, and revenues are a direct result of their marketing strategies. Again, one of the most important factors that has made Intel so successful is the reputation that has been built up since they started. The Intel Inside program, launched in May of 1991, was a promotional campaign that placed the Intel Inside logo on all computers containing the new 486 processor. Clever and effective advertising has also increased Intel's popularity. One of the most popular commercials advertising the Pentium processor shows a fly-through inside a computer, then scans down showing the Intel logo on the processor. Intel definitely has a very bright future ahead of them. 
By continually creating faster and more advanced processors and other computer components, they are always one step ahead of the competition, which makes them a leader. f:\12000 essays\technology & computers (295)\The impact of AI on Warfare.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The Impact of AI on Warfare. It is well known that throughout history man's favourite pastime has been to make war. It has always been recognised that the opponent with the better weapons usually came out victorious. Nowadays, there is an increasing dependency, by the more developed nations, on what are called smart weapons and on the development of these weapons. The social impact of AI on warfare is something which needs to be considered very carefully, for it raises many ethical and moral issues and arguments. The use of smart weapons raises many questions about the price paid to develop these weapons: money which could be used to solve many of the world's social problems such as poverty, hunger, etc. Another issue is the safety involved in the use of these weapons. Can we really make a weapon that does everything on its own without human help, and are these weapons a threat to civilians? The main goal of this essay is to discuss whether it is justifiable to use AI in warfare and to what extent. The old-time dream of making war bloodless by science is finally becoming a reality. The strongest man will not win, but the one with the best machines will. Modernising the weapons used in war has been an issue since the beginning. Nowadays, the military has spent billions of dollars perfecting stealth technology to allow planes to slip past enemy lines undetected. The technology involved in a complicated system such as these fighter planes is immense. The older planes are packed with high-tech gear such as microprocessors, laser guiding devices, electromagnetic jammers and infrared sensors. 
With newer planes, the airforce is experimenting with a virtual reality helmet that projects a cartoon-like image of the battlefield for the pilot, with flashing symbols for enemy planes. What is more, if a pilot passes out for various reasons, such as the "G" force from a tight turn, then a computer system can automatically take over while the pilot is disabled. A recent example of the use of AI in warfare is the Gulf War. In Operation Desert Storm, many weapons such as 'smart' bombs were used. These were highly complex systems which used superior guidance capabilities, but they did not contain any expert systems or neural networks. The development of weapons which use highly complex systems has drastically reduced the number of human casualties in wartime. The bloodshed is minimised because of the accuracy of the computer systems used. This has been an advantage that has brought a lot of praise to the development of such sophisticated (not to mention expensive) weapons. More and more taxpayers' money is invested into research and development of weapons that may never be used. This is because the weapons are mostly for deterrent uses only, and no country really wants to use them because of the power which they hold. The problem with using sophisticated computer systems in warfare is that the technology being used may fall into the wrong hands. But who is to say what are the wrong hands? Most people tend to think that if the technology is on their side, then it cannot be misused. This was proven false when, in the Gulf War, a whole battalion of British armoured vehicles was accidentally annihilated by an allied American stealth fighter which contained complex computer systems that were thought to be faultless. The major problem with the use of highly sophisticated weapons is the cost of development. 
The best solution to this problem has been found to be the fitting of old B-52s with modern technology which is almost as good and gets the job done, all at a minute fraction of the price. The other problem arising from the issue is the control over the development and employment of such weapons. The solution to this problem would be international control over the development and use of weapons by independent organisations such as the United Nations. Also, associations can be formed in order to group all scientists who are involved in the development of the weapons, in order to keep track of them. The use of extremely high-tech weapons should be reserved for cases where it is absolutely necessary. Although governments are eager to try out equipment on which they have spent millions and sometimes billions of taxpayers' money, the use of AI is showing proof that it is serving its ultimate purpose: to slowly move men farther and farther from the killing fields. f:\12000 essays\technology & computers (295)\The Interestingnet .TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ With only 1000 or so networks in the mid 1980's, the Internet has brought tremendous technological change to society over the past few years. In 1994, more than twenty-five million people gained access to the Internet (Grolier). Internet users are mainly from the United States of America and Europe, but other countries around the world will be connected soon as improvements to communication lines are made. The Internet originated in the United States Defense Department's ARPAnet (Advanced Research Projects Agency network, produced by the Pentagon) project in 1969 (Krol). Military planners sought to design a computer networking system that could withstand an attack such as a nuclear war. 
In the 1980's, the National Science Foundation built five Supercomputer Centers to give several universities academic access to high-powered computers formerly available only to the United States military (Krol). The National Science Foundation then built its own network, chaining more universities together. Later, the network connections were being used for purposes unrelated to the National Science Foundation's idea, such as the universities sending electronic mail (today understood as Email). The United States government then helped push the evolution of the Internet, calling the project the Information Super Highway (Groiler..). In the early 1990's, the trend boomed. Businesses soon connected to the Internet, and started using it as a way of saving money through advertising products and electronic mailing (Abbot). Communications between different companies also arose due to the convenience of the Internet. Owners of personal computers soon became eager to connect to the Internet. Through a modem or Ethernet adapter (computer hardware devices that allow a physical connection to the Internet), home computers can now access the Internet (Groiler..). New Internet servers have evolved since the National Science Foundation's basic idea back in the 1980's (Krol). The majority of home users subscribe to services such as Netscape, Prodigy, America Online, and CompuServe. These services are connected to the Internet and provide user-friendly access to it for a reasonable monthly fee. These services also provide access to the World Wide Web. The World Wide Web is a service defined as global international networking (Abbot). The Web makes systems from different countries work together compatibly, thus allowing the Internet to be internationally user-friendly. The United States stock market has greatly benefited from the sudden interest and popularity in the Internet. 
Stockholders with shares of Internet-related companies have watched prices skyrocket over a short period of time. The Internet holds an endless amount of information. From Chia Pets, to vacation sites, to the anatomy of a bullfrog, the Internet covers information on and about anything. For example, I was very interested in the sport Broomball when I played my first game at Iowa State's Hockey Rink. Not knowing much about the newly experienced sport, I grew eager to find out more about it. Using my computer, I typed "Broomball" into Netscape. To my surprise, forty sites that contained the word "Broomball" popped up, and I was able to find out much more about the sport. One of the sites that I visited happened to be down in Australia, another up in Canada! From there, I now know that Broomball leagues can be found all over Canada, and that Broomball was first invented in 1981. Millions of college students' lives have been affected by the Internet. To college students, the Internet is a twenty-four hour library that can be accessed through various computer labs across campuses. To others, it is a way of electronically sending in homework, or sending a letter to a friend who is enrolled at a different college. It is also an exciting, growing spot to visit when boredom sets in. From obtaining information to Emailing, uses of the Internet can be endless for students. With my personal computer set up with Netscape service along with a thirty-dollar Ethernet card, I am able to browse the Internet in my dorm room. I often Email friends at the University of Northern Iowa, my cousin in Chicago, and friends back in my hometown of Dubuque. This is quite handy because I quickly found out that the cost of phone calls can be ridiculous, and the wait for a computer to free up in the labs quite frustrating. In a few computer science classes of mine, Project Vincent is a system used with the Internet during class. 
In class, we use it to gain entry into different programs and software. I also use it weekly to submit my Computer Science 227 programming class homework, which is handy because I do not have to leave my room in order to do homework. In my English class, we often head over to a computer center and discuss previous readings through networking. Here, we can join each other in group discussion while individually logged onto computers at the same time. From my point of view, the Internet has drastically changed my life since my arrival at college. The lines once constructed for nuclear protection have now proven to be a source of useful information and a means of mass communication (Krol). The Internet aids education and makes the amount of resources endless. In the future, more and more colleges, high schools, and grade schools will be connecting to the Internet. Those who are currently connected will definitely stay connected. From my point of view, the Internet will continually be the exciting road towards information and communication in the years to come. Works Cited: Abbot, Tony. On Internet 94: An International Guide to Journals, Newsletters, Texts, Discussion Lists, and Other Resources on the Internet. 1994. Krol, Edward. The Whole Internet. 1992. The 1995 Groiler Multimedia Encyclopedia. "Internet." 1995. f:\12000 essays\technology & computers (295)\The Internet 2.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The Internet is a worldwide connection of thousands of computer networks. All of them speak the same language, TCP/IP, the standard protocol. The Internet allows people with access to these networks to share information and knowledge. Resources available on the Internet include chat groups, e-mail, newsgroups, file transfers, and the World Wide Web. The Internet has no centralized authority and it is uncensored. The Internet belongs to everyone and to no one. The Internet is structured as a hierarchy. 
At the top, each country has at least one public backbone network. Backbone networks are made of high-speed lines that connect to other backbones. There are thousands of service providers and networks that connect home or college users to the backbone networks. Today, there are more than fifty thousand networks in more than one hundred countries worldwide. However, it all started with one network. In the early 1960's, the Cold War was escalating and the United States Government was faced with a problem. How could the country communicate after a nuclear war? The Pentagon's Advanced Research Projects Agency, ARPA, had a solution. They would create a non-centralized network that linked city to city and base to base. The network was designed to function even when parts of it were destroyed. The network could not have a center, because a center would be a primary target for enemies. In 1969, ARPANET was created, named after its original Pentagon sponsor. There were four supercomputer stations, called nodes, on this high-speed network. ARPANET grew during the 1970's as more and more supercomputer stations were added. The users of ARPANET turned the high-speed network into an electronic post office. Scientists and researchers used ARPANET to collaborate on projects and to trade notes. Eventually, people used ARPANET for leisure activities such as chatting. Soon after, the mailing list was developed. Mailing lists were discussion groups of people who would send their messages via e-mail to a group address, and also receive messages from it. This could be done twenty-four hours a day. As ARPANET became larger, a more sophisticated and standard protocol was needed. The protocol would have to link users from other small networks to ARPANET, the main network. The standard protocol invented in 1977 was called TCP/IP. Because of TCP/IP, connecting to ARPANET from any other network was made possible. In 1983, the military portion of ARPANET broke off and formed MILNET. 
The same year, TCP/IP was made a standard and it was being used by everyone. It linked all parts of the branching complex of networks, which soon came to be called the Internet. In 1985, the National Science Foundation (NSF) began a program to establish Internet access centered on its six powerful supercomputer stations across the United States. They created a backbone called NSFNET to connect college campuses via regional networks to its supercomputer centers. ARPANET officially expired in 1989. Most of its networks were absorbed by NSFNET. The others became parts of smaller networks. The Defense Communications Agency shut down ARPANET because its functions had been taken over by NSFNET. Amazingly, when ARPANET was turned off in June of 1990, no one except the network staff noticed. In the early 1990's the Internet experienced explosive growth. It was estimated that the number of computers connected to the Internet was doubling every year. It was also estimated that at this rapid rate of growth, everyone would have an e-mail address by the year 2020. The main cause of this growth was the creation of the World Wide Web. The World Wide Web was created at CERN, a physics laboratory in Geneva, Switzerland. The Web's development was based on a protocol for the transmission of web pages over the Internet, called the Hypertext Transfer Protocol, or HTTP. It is an interactive system for the dissemination and retrieval of information through web pages. The pages may consist of text, pictures, sound, music, voice, animations, and video. Web pages can link to other web pages by hypertext links. When there is hypertext on a page, the user can simply click on the link and be taken to the new page. Previously, the Internet was black and white: text and files. The web added color. Web pages can provide entertainment, information, or commercial advertisement. The World Wide Web is the fastest growing Internet resource. The Internet has dramatically changed from its original purpose. 
It was formed by the United States government for the exclusive use of government officials and the military to communicate after a nuclear war. Today, the Internet is used globally for a variety of purposes. People can send their friends an electronic "hello." They can download a recipe for a new type of lasagna. They can argue about politics on-line, and even shop and bank electronically in their homes. The number of people signing on-line is still increasing and the end is not in sight. As we approach the 21st century, we are experiencing a great transformation due to the Internet and the World Wide Web. We are breaking through the restrictions of the printed page and the boundaries of nations and cultures. You may not be aware of it, but the World Wide Web is currently transforming the world as we know it. You've probably heard a lot about the Internet and the World Wide Web, but you may not know what these terms mean and may be intimidated by this rapidly advancing field of science. If there is one aspect of this field that is advancing faster than any other, it is the ease with which this technology can be learned. The Internet, by definition, is a "network of networks." That is, it is a world-wide network that links many smaller networks. The World Wide Web is a new subdivision of the Internet. The World Wide Web consists of computers (servers) all over the world that store information in a textual as well as a multimedia format. Each of these servers has a specific Internet address which allows users to easily locate information. Files stored on a server can be accessed in two ways. The first is simply by clicking on a link in a Web document (better known as a Web page) that points to the address of another document. The second way to locate a particular Web page is by typing the Uniform Resource Locator (URL) of the page in your browser (the software interface used to navigate the World Wide Web). 
The URL of a page is the string of characters that appears in the Location: box at the top of your screen. Every Web page has a unique URL which begins with the letters "http://" that identify it as a Web page. This is the equivalent of the Internet address and tells the computer where to find the particular page you are looking for. The greatest advantage of producing information in HTML format is that files may be linked to one another via hyperlinks (or links) within the documents. Links usually appear in a different color than the rest of the text on a Web page and are often underlined. Navigating the Web is as simple as clicking a mouse button. Clicking the mouse on a link tells the computer to go to another Internet location and display a specific file. Also, most Web browsers allow easy navigation of the Web by utilizing "Back" and "Forward" buttons that can trace your path around the Web. Links within Web pages aren't limited to just other Web pages. They can include any type of file at all. Some of the more common types of files found on the Web are graphics files, sound files, and files containing movie clips. These files can be run by different helper applications that the Web browser associates with files of that type. As a student, the Web can provide you with an enormous source of information pertaining to any area of academic interest. This can be especially useful when information is needed to write a term paper. Students can use one of the many Search Engines on the Web to locate information on virtually any topic, just by typing the topic that they wish to find information on. Another application many students find the World Wide Web to be useful for is career planning. There are hundreds of Web sites that contain information about job openings in every field all over the United States as well as abroad. 
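The structure of a URL described above — a protocol prefix, a server address, and the path of a file on that server — can be pulled apart programmatically. As a small sketch (the address here is a made-up example, not a real page), Python's standard urllib.parse module splits a URL into those parts:

```python
from urllib.parse import urlparse

# A hypothetical URL, like those typed into the browser's Location: box.
parts = urlparse("http://www.example.edu/admissions/apply.html")

print(parts.scheme)   # the protocol prefix: "http"
print(parts.netloc)   # the server's Internet address: "www.example.edu"
print(parts.path)     # the particular page on that server: "/admissions/apply.html"
```

Clicking a hyperlink does essentially the same work automatically: the browser reads the link's URL, finds the server named in it, and asks that server for the named file.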
Job openings can be found listed either by profession or by geographical location, so students don't have to waste time looking through job listings that don't pertain to their area of interest or location of preference. If students fail to find job openings they are interested in, they can post their resumes to employment service Web sites which try to match employers with those seeking employment. The Web can also be a useful place for high school students applying to college or college graduates who wish to delay their job hunt by going to graduate school. Many colleges and universities around the world are getting on the Internet to provide their students with access to the enormous amount of information available on it. This allows students the opportunity to browse Web servers at different colleges, where they can find information useful in selecting the institution most appropriate to their academic needs. While the World Wide Web can provide information crucial to your academic and professional career, the information contained on it is not limited to such serious matters. The Web can also provide some entertaining diversions from academics. You can spend hours on the Internet and it only feels like a couple of minutes. A recent topic I have personally been looking into is three-dimensional chat rooms. In this type of chat room you virtually walk around, approach other people, and attempt to have a conversation with them. Unfortunately, not everyone is as responsive as you would like them to be. As an avid user of the Internet, I highly recommend that people look into "Worlds Chat". As the 21st century approaches, it seems inevitable that computer and telecommunications technology will radically transform our world in the years to come. The Internet and the World Wide Web, in particular, appear to be the technologies that will lead us into the Information Age. The social and political implications of this new technology are astounding. 
Never before has such an enormous amount of information been available to a limitless number of people. Already, issues of censorship and free speech have come to take center stage, as the world scrambles to deal with the power of modern technology. The World Wide Web has already affected our educational, political, and commercial sectors, and it now seems poised to affect every other aspect of human life. The day when every home has a computer is not far off. In order to keep up with the technology of the future, you need to catch up with the technology of the present. The easiest way to do this is to simply wander around the World Wide Web. It's as easy as clicking a mouse. So sit back and explore the World Wide Web at your own pace, and don't let yourself get left behind when the next technological breakthrough comes along. f:\12000 essays\technology & computers (295)\The Internet.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Imagine talking about the latest elections with someone three thousand miles away without receiving a tremendous phone bill. Or sending a letter to a friend or relative and having it arrive one second later. How would it feel to know that any source of information is at your fingertips at the press of a button? All of these are possible and more with a system of networks, all connected and sending information at light speed from place to place, known as the Internet. This is a trend word for the nineties, yet it has a background that spans all the way back to the sixties. The history of the Internet is a full one, even though it has only been around for about 30 years. It has grown to be the greatest collection of networks in the world; its origins go back to 1962. In 1962, the original idea for this great network of computers sprang forth from a question: "How could U.S. authorities successfully communicate after a nuclear war?" 
The answer came from the Rand Corporation, America's foremost Cold War think-tank: why not create a network of computers without one central, main authoritative unit? (Sterling 1) The Rand Corporation, working alongside the U.S. Advanced Research Projects Agency (ARPA), devised a plan. The network itself would be assumed unreliable at all times, so that no single part of it would ever be depended upon too heavily or become too powerful. Each computer on the network, or node, would have its own authority to originate, pass, and receive messages. The name given to this network was the ARPANET. To fully understand the ARPANET, an understanding of how a network works is needed. A network is a group of computers connected by a permanent cable or temporary phone line. The sole purpose of a network is to be able to communicate and send information electronically. The plan for the ARPANET was to have the messages themselves divided into packets, each packet separately addressed so it could wind its way through the network on an individual basis. If one node was gone, it would not matter; the message would find a way through another node. The idea was kicked around by MIT, UCLA, and RAND during the sixties. After the British set up a test network of this type, ARPA decided to fund a larger project in the USA. The first university to receive a node, called an Interface Message Processor, for this network was UCLA around Labor Day, making September 1, 1969 the birth date of the Internet as we know it today (Cerf 1). The next university was Stanford Research Institute (SRI), then UC Santa Barbara (UCSB), and finally the University of Utah (Cerf 1). The original computers used to connect to the ARPANET were considered supercomputers of their time. Science Data Systems (SDS) Sigma 7 was the name of the original computer at UCLA (Cerf 1). Each one of the computers connected to the others at a speed of about 400,000 bits per second, or 400 kbps, over a dedicated line, which was fast at the time. 
Originally they connected using a protocol called the "Network Control Protocol", or NCP, but as time passed and the technology advanced, NCP was superseded by the protocol used by most Internet users today, TCP/IP (Sterling 2). TCP, or Transmission Control Protocol, converts the message into streams of packets at the source, then reassembles them back into messages at the destination. IP, or Internet Protocol, handles the addressing, seeing to it that packets are routed across multiple nodes and even across multiple networks with multiple standards, not only ARPA's. This protocol came into use around 1977 (Zakon 5). In 1969 there existed 4 nodes, in 1971 there were 15, and in 1972 there were 37 nodes. This exponential growth has continued; even today, in 1996, there are about 5.3 million nodes connected to the Internet (Zakon 14). The number of people, however, can only be estimated, because the number of people connected to any one network varies. The amount of content on the Internet is estimated at about 12,000,000 web pages. As the numbers grew and grew, the military finally dropped out in 1983 and formed MILNET. The ARPANET also took on a new name in 1989; it became known as the Internet. The ARPANET was not the only network of this time. Companies had their own Local Area Networks, or LANs, and Ethernets. LANs usually have one main server and several computers connected to that server, such as the computer lab at Prep. The server usually has a large hard drive and possibly shares a printer. The computers connected to the server generally have a microprocessor and maybe a small hard drive. All the important software is shared from the server. An Ethernet, on the other hand, is similar to a LAN, but the connecting cable is larger and enables computers on the network to be up to 1000 ft. away. An Ethernet is also faster than a regular LAN; its base speed is 10 Mbps. To put this in perspective, it is more than 300 times faster than a regular modem traveling at 28.8 kbps. 
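The division of labor described above — TCP cutting a message into numbered packets at the source and reassembling them at the destination — can be sketched in a few lines of Python. This is only an illustration of the idea, not real TCP; the message text and packet size are invented for the example:

```python
import random

def to_packets(message, size=4):
    """Split a message into (sequence number, data) packets,
    as TCP does conceptually at the source."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Sort packets back into order by sequence number and
    rebuild the original message at the destination."""
    return "".join(data for _, data in sorted(packets))

packets = to_packets("LOGIN REQUEST FROM UCLA")
random.shuffle(packets)   # packets may arrive in any order, by any route
assert reassemble(packets) == "LOGIN REQUEST FROM UCLA"
```

Because each packet carries its own sequence number, it does not matter which path a packet takes or in what order the packets arrive — which is exactly what made the network survivable.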
Each of these types of networks connected to the Internet through its own dedicated node. There is no government regulating the Internet; it is anarchy in its greatest form. The Internet's "anarchy" may seem strange, but it makes a certain deep and basic sense. It's rather like the "anarchy" of the English language. Nobody rents or owns English. As an English-speaking person, it's up to you to learn how to speak English properly and use it however you want. Though many people earn their living from using, exploiting, and teaching English, "English" as an institution is public property. Much the same goes for the Internet. Would the English language be improved if there were an English Language Co.? There'd probably be far fewer new words in English, and fewer new ideas. People on the Internet feel the same way about their institution. It's an institution that resists institutionalization. The Internet belongs to everyone and no one (Sterling 4). Our government and many others are attempting to regulate material on the Internet. The Telecommunications Act, passed about a year ago, included the Communications Decency Act (CDA), which put a few rules not on the Internet itself but on the people who own computers connected to it; child pornography, for example, is illegal to post on any website anywhere. This Act was ruled unconstitutional by the Supreme Court. Other governments have tried to put limitations on the Internet and some have even succeeded. China requires users and ISPs to register with the police. Germany cut off access to some newsgroups carried on CompuServe; this ban was lifted due to protest. Saudi Arabia confines Internet access to universities and hospitals. Singapore requires sites with political and religious content to register with the state. New Zealand classifies computer disks as "publications" that can be censored and seized (Zakon 14). 
On November 1, the New York state senate passed a bill which, barring a constitutional challenge, makes speech that is "harmful to minors" punishable as a felony. Ann Beeson, chief cyberlitigator for the American Civil Liberties Union (ACLU), said "The law will show how nonsensical state regulation of the Internet is. It will affect online users not just in New York, but throughout the world. In addition to violating the First Amendment, the law violates the commerce clause because it regulates the actions of the online community even wholly outside the state of New York." This trend is not limited to New York. In 1995 and '96, 11 states passed laws that somehow censor speech on the Internet. They restrict everything from soliciting minors for online sex (North Carolina) to prohibiting college professors from using university-sponsored Internet resources to view sexually explicit material (Virginia). The ACLU has been the Internet's biggest defender in cases such as the CDA's. With over 2 million servers connected to the Internet, there is always something to do online. In fact, this is a major problem for some people. They spend so much time in cyberspace that they forget how to interact with other people, and their social skills deteriorate. A person like this is known as a net addict. A common question asked is "What is on the Internet that is so addicting?" One possible answer is that online, a person can gain a false sense of reality. A person can be anyone they want to be online. This attraction alone is enough for some people to give up reality altogether. This statement can be debated, but if the choice had to be made between being an ideal person or the regular person, which would be chosen more often? One of the many attractions of the Internet is electronic mail (E-mail), faster by several orders of magnitude than the U.S. mail, which is known by Internet regulars as "snail-mail." 
Internet mail is like a fax: it is electronic text written and then sent from the computer over the phone line to the Internet Service Provider (ISP). The ISP then routes the mail to its destination. One piece of e-mail may pass through over 1000 computers, bouncing off each one, before it reaches its destination. This process takes place all in a matter of seconds, depending on your letter's length and whether you have a file attached. New forms of e-mail are being developed, such as voice mail and video mail; both already exist but require special hardware and software. They also take longer to send and receive. One of the first features on the ARPANET, and then the Internet, was discussion groups. These discussion groups, or "newsgroups" as they are more commonly known, are a world of their own. This world of news, debate, and argument is generally known as USENET. The Internet and USENET are quite different. USENET is rather like an enormous billowing crowd of gossipy, news-hungry people, wandering in and through the Internet on their way to various private backyard barbecues (Sterling 4). At any given moment there are over 28,000 separate newsgroups on USENET, and the discussions generate about 7 million words of typed commentary every single day (Sterling 4). All USENET newsgroups are organized into hierarchies and given prefix names such as: alt (alternative), rec (recreation), comp (computers), misc (miscellaneous), and soc (society). These were the top five newsgroup hierarchies in 1996 (Georgia 206). USENET is the focus of most of the censorship because this is where much of the pornography is viewed. It is uncontrollable because a newsgroup can be created at any time without regulation or supervision. 7.6% of all newsgroups deal with adult-oriented material. It may be a small number, yet it has been blown out of proportion by the media and the like. The main use of the Internet is using a browser such as Netscape or Internet Explorer to view web pages. 
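The hierarchy scheme above is just a convention of dotted names, where everything before the first dot is the top-level hierarchy. Sorting newsgroups by hierarchy is therefore a one-pass exercise; a small sketch in Python, using a handful of made-up group names:

```python
# Hypothetical newsgroup names, grouped by top-level hierarchy prefix.
newsgroups = [
    "rec.sport.hockey", "comp.lang.c", "alt.fan.star-trek",
    "rec.arts.movies", "soc.culture.canada", "misc.jobs.offered",
]

hierarchies = {}
for name in newsgroups:
    prefix = name.split(".")[0]          # "rec", "comp", "alt", ...
    hierarchies.setdefault(prefix, []).append(name)

print(hierarchies["rec"])   # ['rec.sport.hockey', 'rec.arts.movies']
```

This flat naming convention is part of why no central supervision is needed: any new group simply slots into whichever hierarchy its name begins with.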
The trendy word for this is "surfing" or, for some people with slow connections, "crawling". To view a web page, the user types in the desired address and then, magically, it appears on screen. This is the description most users give when asked to explain the Internet. Underneath that, there are complex commands telling the computer what to send and receive, what data is given out, and who is denied or accepted. The process begins with typing in the address, the usual http://www... and so on. The http stands for HyperText Transfer Protocol, which tells the computer which protocol to use over the World Wide Web (www). When the user enters an address such as http://www.microsoft.com it sends the request over the www to find Microsoft's web server. The .com section specifies that this is a commercial site; other suffixes include .edu (education), .mil (military), .gov (government), and .net (external network). When users access Microsoft's site they can explore Microsoft's computer by clicking on hyperlinks, which are links to other pages. Specific pages are normally labeled .htm or .html; these extensions stand for hypertext markup language (HTML), the language in which most webpages are made. All these elements combined are what most people consider the Internet. The Internet is so vast that a person could spend 24 hours a day, 7 days a week, 365 days a year and more online and never see all of it. The amount of information on the Internet is over several trillion (tera) bytes. To put this into perspective, that is over 600,000 floppy disks. With all that information it is easy to lose track of your target and waste time. Sometimes there are multiple tasks to be done, but once an interesting site is found, one hyperlink leads to another; one hour turns into three, and the rest of the world is put on hold. Other times a blank screen can sit there and nothing comes to mind to visit or learn about. 
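The floppy-disk comparison above checks out as rough arithmetic. Taking one trillion bytes per terabyte and 1.44 million bytes per high-density floppy disk (round decimal figures, ignoring the binary-vs-decimal megabyte distinction):

```python
# Back-of-envelope check: how many 1.44 MB floppies hold one terabyte?
terabyte = 10**12          # one trillion bytes
floppy = 1.44 * 10**6      # one high-density floppy disk, in bytes

disks = terabyte / floppy
print(round(disks))        # 694444 -- comfortably "over 600,000"
```

So a single terabyte already needs roughly 694,000 floppies, and "several trillion bytes" pushes the count into the millions.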
The Internet is great if a person has 3 or 4 hours to kill. One tip on how to limit time online: download a timer that disconnects you once a time limit has passed. These programs usually know what day it is and allow only so much time online per day. Self-discipline is another method: train yourself to get up and leave. The consequence of being online for long periods of time is a large access bill from the ISP. Day by day the Internet grows. Some people are predicting a crash because of the excessive traffic online and the limited capabilities of the servers that are visited. AOL did crash for 15 hours several months ago, raising the question, "Can our servers handle the traffic?" The answer, though, lies in the future. As the Internet progresses, so does technology. Every 5 months newer computers are released, and the computers released 5 months earlier go out of date. The technological forecast calls for the Virtual Reality Modeling Language (VRML) in the near future. This enables the user to explore in 3D. Imagine walking through the Sistine Chapel while sitting in an office in Spokane. Many ask, "What does the future of the Internet hold?" Only time will tell. f:\12000 essays\technology & computers (295)\The MouseComputer.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The Mouse The computer mouse is a common pointing device, popularized by its inclusion as standard equipment with the Apple Macintosh. With the rise in popularity of graphical user interfaces in MS-DOS, UNIX, and OS/2, use of mice is growing throughout the personal computer and workstation worlds. The basic features of a mouse are a casing with a flat bottom, designed to be gripped by one hand; one or more buttons on the top; a multidirectional detection device (usually a ball) on the bottom; and a cable connecting the mouse to the computer. By moving the mouse on a surface (such as a desk), the user controls an on-screen cursor. 
A mouse is a relative pointing device because there are no defined limits to the mouse's movement and because its placement on a surface does not map directly to a specific screen location. To select items or choose commands on the screen, the user presses one of the mouse's buttons, producing a "mouse click." f:\12000 essays\technology & computers (295)\The Necessity of Computer Security.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The Necessity Of Computer Security When the first electronic computers emerged from university and military laboratories in the late 1940s and early 1950s, visionaries proclaimed them the harbingers of a second industrial revolution that would transform business, government and industry. But few laymen, even if they were aware of the machines, could see the connection. Experts, too, were sceptical. Not only were computers huge, expensive, one-of-a-kind devices designed for performing abstruse scientific and military calculations, such as cracking codes and calculating missile trajectories, they were also extremely difficult to handle. Now, it is clear that computers are not only here to stay, but that they have a profound effect on society as well. As John McCarthy, Professor of Computer Science at Stanford University, speculated in 1966: "The computer gives signs of becoming the contemporary counterpart of the steam engine that brought on the industrial revolution - one that is still gathering momentum and whose true nature had yet to be seen." Today's applications of computers are vast. They are used to run ordinary household appliances such as televisions and microwaves, to serve as tools in the workplace through word processing, spreadsheets, and graphics software, and to run monumental tasks such as being the heart and soul of the nation's tax processing department and managing the project timetables of the Space Shuttle. 
It is obvious that the computer is now and always will be inexorably linked to our lives, and we have no choice but to accept this technology and learn how to harness its total potential. With any progressing technology, an unauthorized application can almost always be found for it. A computer could be and has been used for theft and fraud - for example, as a database and manager of illegal activities such as drug trafficking and pornography. However, we must not consider only the harmful applications of the computer, but also take into account the good that it has caused. When society embraced computer technology, we had to treat it as an extension of what we already had at hand. This means that some problems that we had before the computer era may also arise now, in forms where computers are an accessory to a crime. One of the problems that society has faced ever since the dawn of civilization is privacy. The issue of privacy on the Internet has raised many arguments for and against it. The issue of privacy has gotten to the point where the government of the United States has proposed a bill promoting a single chip to encrypt all private material on the Internet. Why is privacy so important? Hiding confidential material from intruders does not necessarily mean that what we keep secret is illegal. Since ancient times, people have trusted couriers to carry their messages. We seal our messages in an envelope when sending mail through the postal service. Using computers and encryption programs to transfer electronic messages securely is no different from sending a letter the old-fashioned way. This paper will examine the modern methods of encrypting messages and analyse why Phil Zimmerman created an extremely powerful civilian encipherment program, called the PGP, for "Pretty Good Privacy."
In particular, by focusing on cryptography, which was originally intended for military use, this paper will examine just how easy it is to conclude that giving civilians a military-grade encrypting program such as the PGP may be dangerous to national security. This paper will argue, however, that as with any new technology, the application of cryptography for civilian purposes is not just a right, but is also a necessity. Increasingly in today's era of computer technology, not only banks but also businesses and government agencies are turning to encryption. Computer security experts consider it the best and most practical way to protect computer data from unauthorized disclosure when transmitted and even when stored on a disk, tape, or the magnetic strip of a credit card. Two encryption systems have led the way in the modern era. One is the single-key system, in which data is both encrypted and decrypted with the same key, a sequence of eight numbers, each between 0 and 127. The other is a 2-key system; in this approach to cryptography, a pair of mathematically complementary keys, each containing as many as 200 digits, is used for encryption and decryption. In contrast with ciphers of earlier generations, where security depended in part on concealing the algorithm, confidentiality of a computer-encrypted message hinges solely on the secrecy of the keys. Each system is thought to encrypt a message so inscrutably that the step-by-step mathematical algorithms can be made public without compromising security. The single-key system, named the Data Encryption Standard - DES for short - was adopted in 1977 as the official method for protecting unclassified computer data in agencies of the American Federal government.
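To make the single-key idea concrete, here is a toy sketch in Python. This is not DES - it is a simple repeating-key XOR cipher, far too weak for real use, and the key and message are invented for illustration. What it shows is the defining property of a single-key system: the identical secret key performs both encryption and decryption.

```python
# Toy single-key (symmetric) cipher: the SAME key both encrypts and
# decrypts.  This is a repeating-key XOR, much weaker than DES, shown
# only to illustrate the shared-secret idea.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the message with the key, repeating the key as
    # needed.  Applying the same operation twice restores the original.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"8bytekey"                        # both parties must share this secret
ciphertext = xor_cipher(b"MEET AT NOON", key)
plaintext = xor_cipher(ciphertext, key)  # same key, same function, decrypts

assert plaintext == b"MEET AT NOON"
assert ciphertext != b"MEET AT NOON"
```

A real single-key cipher such as DES scrambles each eight-character block through many key-dependent rounds rather than a single XOR, but the key-management burden is the same: sender and receiver must both hold the identical secret key.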
Its evolution began in 1973 when the US National Bureau of Standards, responding to public concern about the confidentiality of computerized information outside military and diplomatic channels, invited the submission of data-encryption techniques as the first step towards an encryption scheme intended for public use. The method selected by the bureau as the DES was developed by IBM researchers. During encryption, the DES algorithm divides a message into blocks of eight characters, then enciphers them one after another. Under control of the key, the letters and numbers of each block are scrambled no fewer than 16 times, resulting in eight characters of ciphertext. As good as the DES is, obsolescence will almost certainly overtake it. The life span of encryption systems tends to be short; the older and more widely used a cipher is, the higher the potential payoff if it is cracked, and the greater the likelihood that someone has succeeded. An entirely different approach to encryption, called the 2-key or public-key system, simplifies the problem of key distribution and management. This approach to cryptography eliminates the need for subscribers to share keys that must be kept confidential. In a public-key system, each subscriber has a pair of keys. One of them is the so-called public key, which is freely available to anyone who wishes to communicate with its owner. The other is a secret key, known only to its owner. Though either key can be used to encipher or to decipher data encrypted with its mate, in most instances the public key is employed for encoding and the private key for decoding. Thus, anyone can send a secret message to anyone else by using the addressee's public key to encrypt its contents. But only the recipient of the message can make sense of it, since only that person has the private key. One such public-key cryptosystem is the PGP, for Pretty Good Privacy.
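The public-key mechanics described above can be sketched with textbook RSA numbers. The primes here are tiny and the whole example is insecure by design - real keys use primes hundreds of digits long - but the asymmetry is exactly the one the essay describes: anyone holding the public key can encrypt, and only the holder of the private key can decrypt.

```python
# Toy RSA: encrypt with the public key, decrypt with the private key.
# The numbers are the classic small textbook example, not a real key.

p, q = 61, 53                # two secret primes
n = p * q                    # 3233, the public modulus
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # 2753, private exponent: e*d == 1 (mod phi)

message = 65                             # a message encoded as a number < n
ciphertext = pow(message, e, n)          # anyone can compute this with (e, n)
recovered = pow(ciphertext, d, n)        # only the holder of d can undo it

assert ciphertext == 2790
assert recovered == message
```

The modular inverse via `pow(e, -1, phi)` requires Python 3.8 or later; the same `pow(base, exp, mod)` built-in does the fast modular exponentiation for both directions.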
Designed by Phil Zimmerman, this program is freely distributed for the purpose of giving the public the knowledge that whatever communications they pass, they can be sure that they are practically unbreakable. PGP generates a public and private key for the user using the RSA technique. The data is then encrypted and decrypted with the IDEA algorithm - which is similar to the DES, but the work factor to decode the encrypted message by brute force is much higher than what the DES could provide. The reason why the RSA is used only when generating the keys is that the RSA takes a very long time to encrypt an entire document, whereas using the RSA on the keys alone takes a mere fraction of the time. At this time, Zimmerman is being charged by the US government for his effort in developing the PGP. The government considers encryption a weapon, and it has established regulations controlling or prohibiting the export of munitions. Since the PGP is a powerful encryption program, it is considered and can be used as a powerful weapon and may be a threat to national security. On the Internet, it is clear that many people all over the world are against the US government's effort to limit the PGP's encryption capabilities, and their reason is that the ban infringes on the people's right to privacy. The PGP must not be treated only as a weapon, for it has uses that have nothing to do with wartime. One of them is authentication. The two-key cryptosystem is designed with authentication in mind: using someone's public key to encrypt ensures that only the owner of the private key can decrypt the message. In the real world, we use our own signature to prove our identity in signing cheques or contracts. There exist retina scanners that check the blood vessels in our eyes, as well as fingerprint analysis devices. These use our physical characteristics to prove our identity.
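The authentication property comes from running the key pair in the opposite direction: the private key produces a signature, and anyone holding the public key can verify it. A minimal sketch, reusing the same toy RSA numbers as above (invented and insecure, for illustration only):

```python
# Toy RSA signature: transform a message digest with the PRIVATE key.
# Anyone can check the result with the public key, but only the
# private-key holder could have produced it.

n, e, d = 3233, 17, 2753     # toy key pair: (n, e) public, d private

digest = 123                           # stand-in for a hash of the document
signature = pow(digest, d, n)          # only the owner of d can compute this
verified = pow(signature, e, n)        # anyone with (n, e) can check it

assert verified == digest              # signature matches: identity proved
```

In a real system such as PGP, what gets signed is a cryptographic hash of the whole document, so tampering with even one character of the text invalidates the signature.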
A digital signature generated by a public-key cryptosystem is much harder to counterfeit because of the mathematics of factoring - which is an advantage over conventional methods of testing for our identity. Another analogy the PGP has with the real world is the need for security. Banks and corporations employ a trusted courier - in the form of an armoured truck or a guard - to transfer sensitive documents or valuables. However, this is expensive for civilian purposes, and the PGP provides the same or better security when securing civilian information. While many argue that limiting the PGP's abilities is against the people's right to privacy, the PGP must also be seen as a necessity as we enter the Information Age. There is currently little or no practical and inexpensive way to secure digital information for civilians, and the PGP is an answer to this problem. Computer privacy must not be treated differently than any other method of making documents private. Rather, we must consider the computer a tool and use it as an extension of society's evolution. Clearly the techniques we employ for computer privacy, such as encryption, secure transfers and authentication, closely mirror past efforts at privacy and non-criminal efforts. The government is putting more pressure against the distribution of the PGP outside of the United States. One of its main reasons is that since the program is freely distributed, it can be modified in such a way that even the vast computational resources of the US government cannot break the PGP's secured messages. The government could now reason that the PGP can provide criminal organizations a means of secure communication and storage of their activities, and thus make law enforcement's job much harder in tracking criminals down and proving them guilty. Also, we must never forget one of our basic human rights - one that many laid down their lives for: freedom. We have the freedom to do anything we wish that is within the law.
The government is now attempting to pass a bill promoting a single algorithm to encrypt and decrypt all data that belongs to its citizens. A multitude of people around the world are opposed to this concept, arguing that it is against their freedom and their privacy. f:\12000 essays\technology & computers (295)\The Office of Tomorrow.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The Office of Today In an increasing number of companies, traditional office space is giving way to community areas and empty chairs as employees work from home, from their cars or from virtually anywhere. Advanced technologies and progressive HR strategies make these alternative offices possible. Imagine it's 2 o'clock on a Wednesday afternoon. Inside the dining room of a nationwide ad agency's offices, Joe Smith, manager of HR, is downing a sandwich and soda while wading through phone and E-mail messages. In front of him, a computer equipped with a fax-modem is plugged into a special port on the dining table. The contents of his briefcase are spread on the table. As he sifts through a stack of paperwork and types responses into the computer, he periodically picks up a cordless phone and places a call to a colleague or associate. As he talks, he sometimes wanders across the room. To be sure, this isn't your ordinary corporate environment. Smith doesn't have a permanent desk or workspace, nor his own telephone. When he enters the ad agency's building, he checks out a portable Macintosh computer and a cordless phone and heads off to whatever nook or cranny he chooses. It might be the company library, or a common area under a bright window. It could even be the dining room or Student Union, which houses punching bags, televisions and a pool table. Wherever he goes, a network forwards mail and phone pages to him and a computer routes calls, faxes and E-mail messages to his assigned extension.
He simply logs onto the firm's computer system and accesses his security-protected files. He is not tethered to a specific work area nor forced to function in any predefined way. Joe Smith spends mornings, and sometimes even an entire day, connected from home via sophisticated voicemail and E-mail systems, as well as a pager. His work is process- and task-oriented. As long as he gets everything done, that's what counts. Ultimately, his productivity is greater and his job-satisfaction level is higher. And for somebody trying to get in touch with him, it's easy. Nobody can tell that Joe might be in his car or sitting at home reading a stack of resumes in his pajamas. The call gets forwarded to him wherever he's working. You've just entered the vast frontier of the virtual office - a universe in which leading-edge technology and new concepts redefine work and job functions by enabling employees to work from virtually anywhere. The concept allows a growing number of companies to change their workplaces in ways never considered just a few years ago. They're scrapping assigned desks and conventional office space to create a bold new world where employees telecommute, function on a mobile basis or use satellite offices or communal work areas that are free of assigned spaces with personal knickknacks. IBM, AT&T, Travelers Corporation, Pacific Bell, Panasonic, Apple Computer and J.C. Penney are among the firms embracing the virtual-office concept. But they're just a few. The percentage of U.S. companies that have work-at-home programs alone has more than doubled in the past five years, from 7% in 1988 to 18% today. In fact, New York-based Link Resources, which tracks telecommuting and virtual-office trends, has found that 7.6 million Americans now telecommute - a figure that's expected to swell to 25 million by the year 2000.
And if you add mobile workers - those who use their cars, client offices, hotels and satellite work areas to get the job done - there are an estimated 1 million more virtual workers. Both companies and employees are discovering the benefits of virtual arrangements. Businesses that successfully incorporate them are able to slash real-estate costs and adhere to stringent air-quality regulations by curtailing traffic and commuters. They're also finding that by being flexible, they're more responsive to customers, while retaining key personnel who otherwise might be lost to a cross-country move or a newborn baby. And employees who successfully embrace the concept are better able to manage their work and personal lives. Left for the most part to work on their own terms, they're often happier, as well as more creative and productive. Of course, the basic idea of working away from the office is nothing new. But today, high-speed notebook computers, lightning-fast data modems, telephone lines that provide advanced data-transmission capabilities, portable printers and wireless communication are starting a quiet revolution. As a society, we're transforming the way we work and what's possible. The shift is creating tremendous opportunities, but it is also generating a great deal of stress and difficulty. Tremendous organizational changes are required to make it work. As markets have changed - as companies have downsized, streamlined and restructured - many have been forced to explore new ways to support the work effort. The virtual office, or alternative office, is one of the most effective strategies for dealing with these changes. Of course, the effect of alternative officing on the HR function is great. HR must change the way it hires, evaluates employees and terminates them. It must train an existing work force to fit into a new corporate model. There are issues involving benefits, compensation and liability.
And, perhaps most importantly, there's the enormous challenge of holding the corporate culture together - even if employees no longer spend time socializing over the watercooler or in face-to-face meetings. When a company makes a commitment to adopt a virtual-office environment - whether it's shared workspace or basic telecommuting - it takes time for people to acclimate and adjust. If HR can't meet the challenge, and employees don't buy in, then the program is destined to fail. Virtual offices break down traditional office walls. Step inside one and you quickly see how different an environment the concept has created. Gone are the cubicles in which employees used to work. In their place are informal work carrels and open areas where any employee - whether it's the CEO or an administrative assistant - can set up shop. Teams may assemble and disperse at any given spot, and meetings and conferences happen informally wherever it's convenient. Only a handful of maintenance workers, phone operators and food-services personnel, whose flexibility is limited by their particular jobs, retain any appearance of a private workspace. Equally significant is the fact that at any given hour of any day, as many as one-third of the salaried work force aren't in the office. Some are likely working at a client's site, others at home or in a hotel room on the road. The feeling is that the employees of virtual offices are self-starters. The work environment is designed around the concept that one's best thinking isn't necessarily done at a desk or in an office. Sometimes it's done in a conference room with several people. Other times it's done on a ski slope or driving to a client's office. Founders of the concept wanted to eliminate the boundaries about where people are supposed to think. They wanted to create an environment that was stimulating and rich in resources.
Employees decide on their own where they will work each day, and are judged on work produced rather than on hours put in at the office. One company that has jumped headfirst into the virtual-office concept is Armonk, New York-based International Business Machines' Midwest division. The regional business launched a virtual-office work model in the spring of 1993 and expects 2,500 of its 4,000 employees - salaried staff from sales, marketing, technical and customer service, including managers - to be mobile by the beginning of 1995. Its road workers, equipped with IBM ThinkPad computers, fax-modems, E-mail, cellular phones and a combination of proprietary and off-the-shelf software, use their cars, client offices and homes as work stations. When they do need to come into an office - usually once or twice a week - they log onto a computer that automatically routes calls and faxes to the desk at which they choose to sit. So far, the program has allowed Big Blue's Midwest division to reduce real-estate space by nearly 55%, while increasing the ratio of employees to workstations from 4-to-1 to almost 10-to-1. More importantly, it has allowed the company to harness technology that lets employees better serve customers, and it has raised the job-satisfaction level of workers. A recent survey indicated that 83% of the region's mobile work force wouldn't want to return to a traditional office environment. IBM maintains links with the mobile work force in a variety of ways. All employees access their E-mail and voicemail daily; important messages and policy updates are broadcast regularly into the mailboxes of thousands of workers. When the need for teleconferencing arises, it can put hundreds of employees on the line simultaneously. Typically, the organization's mobile workers link from cars, home offices, hotels, even airplanes. Virtual workers are only a phone call away. To be certain, telephony has become a powerful driver in the virtual-office boom.
Satellites and high-tech telephone systems, such as ISDN phone lines, allow companies to zap data from one location to another at light speed. Organizations link to their work force and hold virtual meetings using tools such as video-conferencing. Firms grab a strategic edge in the marketplace by providing workers with powerful tools to access information. Consider Gemini Consulting, a Morristown, New Jersey-based firm that has 1,600 employees spread throughout the United States and beyond. A sophisticated E-mail system allows employees anywhere to access a central bulletin board and data base via a toll-free phone number. Using Macintosh Powerbook computers and modems, they tap into electronic versions of The Associated Press, Reuters and The Wall Street Journal, and obtain late-breaking news and information on clients, key subjects, even executives within client companies. And that's just the beginning. Many of the firm's consultants have Internet addresses, and HR soon will begin training its officeless work force via CD-ROM. It will mail disks to workers, who will learn on their own schedule using machines the firm provides. The bottom line of this technology? Gemini can eliminate the high cost of flying consultants into a central location for training. Today, the technology exists to break the chains of traditional thought and the typical way of doing things. It's possible to process information and knowledge in dramatically different ways than in the past. That can mean that instead of one individual or a group handling a project from start to finish, teams can process bits and pieces. They can assemble and disassemble quickly and efficiently. Some companies, such as San Francisco-based Pacific Bell, have discovered that providing telecommuters with satellite offices can further facilitate efficiency. The telecommunications giant currently has nearly 2,000 managers splitting time between home and any of the company's offices spread throughout California. 
Those who travel regularly or prefer not to work at home also can drop into dozens of satellite facilities, each equipped with a handful of workstations. At these centers, they can access exclusive data bases, check E-mail and make phone calls. Other firms have pushed the telecommuting concept even further. One of them is Great Plains Software, a Fargo, North Dakota-based company that produces and markets PC-based accounting programs. Despite its remote location, the company retains top talent by being flexible and innovative. Some of its high-level managers live and work in such places as Montana and New Jersey. Even its local employees may work at home a few days a week. Lynne Stockstad's situation at Great Plains demonstrates how a program that allows for flexible work sites can benefit both employer and worker. The competitive-research specialist had spent two years at Great Plains when her husband decided to attend chiropractic college in Davenport, Iowa. At most firms, that would have prompted Stockstad to resign - something that also would have cost the company an essential employee. Instead, Stockstad and Great Plains devised a system that would allow her to telecommute from Iowa and come to Fargo only for meetings when absolutely necessary. Using phone, E-mail, voicemail and fax, she and her work team soon found they were able to link together and complete work just as efficiently as before. Today, with her husband a recent graduate, Stockstad has moved back to Fargo and has received a promotion. Great Plains uses similar technology in other innovative ways to build a competitive advantage. For example, it has developed a virtual hiring process. Managers who are spread across the country conduct independent interviews with candidates, and then feed their responses into the company's computer. Later, the hiring team holds a meeting, usually via phone or videoconferencing, to render a verdict.
Only then does the firm fly the candidate to Fargo for the final interview. HR must lay the foundation to support a mobile work force. Just as a cafeteria offers a variety of foods to suit individual tastes and preferences, the workplace of the future is evolving toward a model in which alternative work options likely will become the norm. One person may find that telecommuting four days a week is great; another may find that he or she functions better in the office. The common denominator for the organization is: How can we create an environment in which people are able to produce to their maximum capabilities? Creating such a model and making it work is no easy task, however. Such a shift in resources requires a fundamental change in thinking. And it usually falls squarely on HR's shoulders to oversee the program and hold the organization together during trying times. When a company decides to participate in an alternative officing program, people need to adapt and adjust to the new arrangements. Workers are used to doing things a certain way. Suddenly, their world is being turned upside down. One of the biggest problems is laying the foundation to support such a system. Often, it's necessary to tweak benefits and compensation, create new job descriptions and methods of evaluation and find innovative ways to communicate. Sometimes, because companies are liable for their workers while they're "on the clock," HR must send inspectors to home offices to ensure they're safe. When Great Plains Software started its telecommuting program in the late 1980s, it established loose guidelines for employees who wanted to be involved in the program. They pretty much implemented policies on an unscientific basis. Over time, the company has evolved to a far more stringent system of determining who qualifies and how the job is defined.
For example, as with most other companies that embrace the virtual-office concept, Great Plains stipulates that only salaried employees can work in virtual offices because of the lack of a structured time schedule and the potential for working more than eight hours a day. Those employees who want to telecommute must first express how the decision will benefit the company, the department and themselves. Only those who can convince a hiring manager that they meet all three criteria move on to the next stage. Potential telecommuters then must define how they'll be accountable and responsible in the new working model. Finally, once performance standards and guidelines have been created, Great Plains presents two disclaimers to those going virtual. If their performance falls below certain predetermined standards, management will review the situation to determine whether it's working. And if the position changes significantly and it no longer makes sense to telecommute, management will have to reevaluate. Other companies have adopted similar checks and balances. They are training HR advisers to make accommodations for the individual, but not for the person's job responsibilities. IBM provides counseling from behavioral scientists and offers ongoing assistance to those having trouble adapting to the new work model. By closely monitoring preestablished sales and productivity benchmarks, managers can quickly determine if there's a problem. So far, only approximately 10% to 15% of its mobile work force has required counseling, and only a handful of employees have had to be reassigned. Virtual workers need guidance from HR. Not everyone is suited to working in a virtual-office environment. Not only must workers who go mobile or work at home learn to use the technology effectively, but they also must adjust their workstyle and lifestyle. The more you get connected, the harder it is to disconnect.
At some point, the boundaries between work and personal life blur. Without a good deal of discipline, the situation can create a lot of stress. Managers often fear that employees will not get enough work done if they can't see them. Most veterans of the virtual office, however, maintain that the exact opposite is true. All too often, employees wind up fielding phone calls in the evening or stacking an extra hour or two on top of an eight-hour day. Not surprisingly, that can create an array of problems, including burnout, errors and marital conflict. IBM learned early on that it has to teach employees to remain in control of the technology and not let it overrun their lives. One of the ways it achieves the goal is to provide its mobile work force with two-line telephones. That way, employees can recognize calls from work, switch the ringer off at the end of the workday and let the voicemail system pick up calls. Another potential problem with which virtual employees must deal is handling all the distractions that can occur at home. As a result, many firms provide workers with specific guidelines for handling work at home. It is expected that those who work at home will arrange child care or elder care. And although management recognizes there are times when a babysitter falls through or a problem occurs, if someone's surrounded by noisy children, it creates an impression that the individual isn't working or is distracted. Still, most say that problems aren't common. The majority of workers adjust and become highly productive in an alternative office environment. The most important thing for a company to do is lay out guidelines and suggestions that help workers adapt. At many firms, including IBM, HR now is providing booklets that cover a range of topics, including time management and family issues. 
Many companies also send out regular mailings that not only provide tips and work strategies but also keep employees informed of company events and keep them ingrained in the corporate culture. This type of correspondence also helps alleviate workers' fears of isolation. IBM goes one step further by providing voluntary outings, such as to the Indianapolis 500, for its mobile work force. Even without these events, virtual workers' isolation fears often prove unfounded. The level of interaction in a virtual office actually can be heightened and intensified. Because workers aren't in the same place every day, they may be exposed to a wider range of people and situations. And that can open their eyes and minds to new ideas and concepts. However, dismantling the traditional office structure can present other HR challenges. One of the most serious can be dealing with issues of identity and status. Workers who've toiled for years to earn a corner office suddenly can find themselves thrown into a universal work pod. Likewise, photographs and other personal items often must disappear as workspace is shared. But solutions do exist. For instance, when IBM went mobile, top executives led by example. They immediately cleared out their desks and began plugging in at common work pods. Not surprisingly, one of the most difficult elements in creating a virtual office is dealing with this human side of the equation. The human factor can send shock waves reverberating through even the most sober organization. This challenge requires HR to become an active business partner. That means working with other departments, such as real estate, finance and information technology. It means creating the tools to make a virtual office work. In some cases, that may require HR to completely rewrite a benefits package to include a $500- or $1,000-a-month stipend for those working at home.
That way, the company saves money on real-estate and relocation costs, while the employee receives an incentive that can be used to furnish a home office. Management also must change the way supervisors evaluate their workers. Managers easily can fall into the trap of thinking that only face-to-face interaction is meaningful and may pass over mobile workers for promotions. Great Plains has gone to great lengths to ensure that its performance-evaluation system functions in a virtual environment. The company asks its managers to conduct informal reviews quarterly with telecommuting employees, and formal reviews every six months. By increasing the interaction and discussion, the company has eliminated much of the anxiety for employees - and their managers - while providing a better gauge of performance. In the final analysis, the system no longer measures good citizenship and attendance, but how much work people actually get done and how well they do it. Still, many experts point out that too much reliance on voicemail and E-mail can present problems. Although instantaneous messaging is convenient and efficient, it can overload virtual workers with too much information and not enough substance. Without some human interaction, it's impossible to build relationships and a sense of trust within an organization. Sending workers offsite can boost productivity while saving costs. Those who have embraced the virtual office say that it's a concept that works. At Pacific Bell, which began experimenting with telecommuting during the 1984 Summer Olympics in Los Angeles, employees routinely have reported 100% increases in productivity. Equally important, employees say the arrangement accommodates family and flexibility issues and that they enjoy working for the company more than ever before. Although the final results aren't yet in, IBM's mobile work force reports a 10% boost in morale and appears to be processing more work, more efficiently.
What's more, its customers have so far reported highly favorable results. People are happier and more productive because they can have breakfast with their family before they go off to client meetings. They can go home and watch their child's soccer game and then do work in the evening. They no longer are bound by a nine-to-five schedule. The only criterion is that they deliver results. Society is on the frontier of a fundamental change in the way the workplace is viewed and how work is handled. In the future, it will become increasingly difficult for traditional companies to compete against those embracing the virtual office. Companies that embrace the concept are sending out a loud message. They're making it clear that they're interested in their employees' welfare, that they're seeking a competitive edge, and that they aren't afraid to rethink their work force for changing conditions. Those are the ingredients for future success. f:\12000 essays\technology & computers (295)\The Origins of the Computer.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ This report is to be distributed freely and not to be sold for profit, etc. This report can be modified, as long as you keep in mind that you didn't write it. And you are not to hand in this report claiming credit for it. The Roman Empire, founded by Augustus Caesar in 27 B.C. and lasting in Western Europe for 500 years, reorganized world politics and economics. Almost the entirety of the civilized world became a single centralized state. In place of Greek democracy, piety, and independence came Roman authoritarianism and practicality. Vast prosperity resulted. Europe and the Mediterranean bloomed with trading cities ten times the size of their predecessors, with public amenities previously unheard of: courts, theaters, circuses, and public baths. And these were now large permanent masonry buildings, as were the habitations: tall apartment houses covering whole city blocks. 
This architectural revolution brought about by the Romans required two innovations: the invention of a new building method called concrete vaulting and the organization of labor and capital on a large scale so that huge projects could be executed quickly after the plans of a single master architect. Roman concrete was a fluid mixture of lime and small stones poured into the hollow centers of walls faced with brick or stone and over curved wooden molds, or forms, to span spaces as vaults. The Mediterranean is an active volcanic region, and a spongy, light, tightly adhering stone called pozzolana was used to produce a concrete that was both light and extremely strong. The Romans had developed pozzolana concrete about 100 B.C. but at first used it only for terrace walls and foundations. It apparently was the emperor Nero who first used the material on a grand scale, to rebuild a region of the city of Rome around his palace, the expansive Domus Aurea, after the great fire of AD 64, which he was said to have set. Here broad streets, regular blocks of masonry apartment houses, and continuous colonnaded porticoes were erected according to a single plan and partially at state expense. The Domus Aurea itself was a labyrinth of concrete vaulted rooms, many in complex geometric forms. An extensive garden with a lake and forest spread around it. The architect Severus seems to have been in charge of this great project. Emperors and emperors' architects succeeding Nero and Severus continued and expanded their work of rebuilding and regularizing Rome. Vespasian (emperor AD 69-79) began the Colosseum. Built by prisoners from the Jewish wars, the 50,000-seat Colosseum is one of the most interesting architectural feats of Rome. At its opening in AD 80 the Colosseum was flooded, by diverting the Tiber river about 10 kilometers, to re-enact a naval battle with over 3,000 participants. 
Domitian (81-96) rebuilt the Palatine Hill as a huge palace of vaulted concrete designed by his architect Rabirius. Trajan (98-117) erected the expansive forum that bears his name (designed by his architect Apollodorus) and a huge public bath. Hadrian (117-138), who served as his own architect, built the Pantheon as well as a villa the size of a small city for himself at Tivoli. Later, Caracalla (211-217) and Diocletian (284-305) erected two mammoth baths that bear their names, and Maxentius (306-312) built a huge vaulted basilica, now called the Basilica of Constantine. The Baths of Caracalla have long been accepted as a summation of Roman culture and engineering. It is a vast building, 360 by 702 feet (110 by 214 meters), set in 50 acres (20 hectares) of gardens. It was one of a dozen establishments of similar size in ancient Rome devoted to recreation and bathing. There were a 60- by 120-foot (18- by 36-meter) swimming pool, hot and cold baths, gymnasia, a library, and game rooms. These rooms were of various geometric shapes. The walls were thick, with recesses, corridors, and staircases cut into them. The building was entirely constructed of concrete, with barrel, groined, and domical vaults spanning as far as 60 feet (18 meters) in many places. Inside, all the walls were covered with thin slabs of colored marble or with painted stucco. The decorative forms of this coating were derived from Greek architecture. The rebuilding of Rome set a pattern copied all over the empire. Nearby, the ruins of Ostia, Rome's port (principally constructed in the 2nd and 3rd centuries AD), reflect that model. Farther away it reappears at Trier in northwestern Germany, at Autun in central France, at Antioch in Syria, and at Timgad and Leptis Magna in North Africa. 
When political disintegration and barbarian invasions disrupted the western part of the Roman Empire in the 4th century AD, new cities were founded and built in concrete during short construction campaigns: Ravenna, the capital of the Western Empire from 492-539, and Constantinople in Turkey, where the seat of the empire was moved by Constantine in 330 and which continued thereafter to be the capital of the Eastern, or Byzantine, Empire. Christian Rome. One important thing had changed by the time of the founding of Ravenna and Constantinople: after 313 this was the Christian Roman Empire. The principal challenge to the imperial architects was now the construction of churches. These churches were large vaulted enclosures of interior space, unlike the temples of the Greeks and the pagan Romans, which were mere statue-chambers set in open precincts. The earliest imperial churches in Rome, like the first church of St. Peter's erected by Constantine from 333, were vast barns with wooden roofs supported on lines of columns. They resembled basilicas, which had carried on the Hellenistic style of columnar architecture. Roman concrete vaulted construction was used in certain cases, for example, in the tomb church in Rome of Constantine's daughter, Santa Costanza, of about 350. In the church of San Vitale in Ravenna, erected in 526-547, this was expanded to the scale of a middle-sized church. Here a domed octagon 60 feet (18 meters) across is surrounded by a corridor, or aisle, and a balcony 30 feet (9 meters) deep. On each side a semicircular projection from the central space pushes outward to blend these spaces together. f:\12000 essays\technology & computers (295)\The Power On Self Test.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The Power On Self Test When the system is powered on, the BIOS will perform diagnostics and initialize system components, including the video system. 
(This is self-evident when the screen first flickers before the video card header is displayed). This is commonly referred to as POST (Power-On Self Test). Afterwards, the computer will proceed to its final boot-up stage by calling the operating system. Just before that, the user may interrupt to gain access to SETUP. To allow the user to alter the CMOS settings, the BIOS provides a little program, SETUP. Usually, setup can be entered by pressing a special key combination (DEL, ESC, CTRL-ESC, or CTRL-ALT-ESC) at boot time (some BIOSes allow you to enter setup at any time by pressing CTRL-ALT-ESC). The AMI BIOS is usually entered by pressing the DEL key after resetting (CTRL-ALT-DEL) or powering up the computer. You can bypass the extended CMOS settings by holding the key down during boot-up. This is really helpful, especially if you bend the CMOS settings right out of shape and the computer won't boot properly anymore. This is also a handy tip for people who play with the older AMI BIOSes with the XCMOS setup, which allows changes directly to the chip registers with very little technical explanation. A Typical BIOS POST Sequence Most BIOS POST sequences proceed in four stages: 1. Display some basic information about the video card, like its brand, video BIOS version and video memory available. 2. Display the BIOS version and copyright notice in the upper middle of the screen. You will see a large sequence of numbers at the bottom of the screen. This sequence is the . 3. Display the memory count. You will also hear tick sounds if you have enabled them (see Memory Test Tick Sound section). 4. Once the POST has succeeded and the BIOS is ready to call the operating system (DOS, OS/2, NT, WIN95, etc.) you will see a basic table of the system's configuration: · Main Processor: The type of CPU identified by the BIOS. Usually Cx386DX, Cx486DX, etc. · Numeric Processor: Present if you have an FPU, or None otherwise. 
If you have an FPU and the BIOS does not recognize it, see section Numeric Processor Test in Advanced CMOS Setup. · Floppy Drive A: The drive A type. See section Floppy drive A in Standard CMOS Setup to alter this setting. · Floppy Drive B: Idem. · Display Type: See section Primary display in Standard CMOS Setup. · AMI or Award BIOS Date: The revision date of your BIOS. Useful to mention when you have compatibility problems with adaptor cards (notably fancy ones). · Base Memory Size: The number of KB of base memory. Usually 640. · Ext. Memory Size: The number of KB of extended memory. In the majority of cases, the sum of base memory and extended memory does not equal the total system memory. For instance, in a 4096 KB (4 MB) system, you will have 640 KB of base memory and 3072 KB of extended memory, a total of 3712 KB. The missing 384 KB is reserved by the BIOS, mainly as shadow memory (see Advanced CMOS Setup). · Hard Disk C: Type: The master HDD number. See Hard disk C: type section in Standard CMOS Setup. · Hard Disk D: Type: The slave HDD number. See Hard disk D: type section in Standard CMOS Setup. · Serial Port(s): The hex addresses of your COM ports. 3F8 and 2F8 for COM1 and COM2. · Parallel Port(s): The hex address of your LPT ports. 378 for LPT1. · Other information: Right under the table, the BIOS usually displays the size of cache memory. Common sizes are 64KB, 128KB or 256KB. See External Cache Memory section in Advanced CMOS Setup. AMI BIOS POST Errors During the POST routines, which are performed each time the system is powered on, errors may occur. Non-fatal errors are those which, in most cases, allow the system to continue the boot-up process. The error messages normally appear on the screen. Fatal errors are those which will not allow the system to continue the boot-up procedure. If a fatal error occurs, you should consult your system manufacturer or dealer for possible repairs. These errors are usually communicated through a series of audible beeps. 
The numbers on the fatal error list correspond to the number of beeps for the corresponding error. All errors listed, with the exception of #8, are fatal errors. All errors found by the BIOS are also forwarded to I/O port 80h. · 1 beep: DRAM refresh failure. The memory refresh circuitry on the motherboard is faulty. · 2 beeps: Parity Circuit failure. A parity error was detected in the base memory (first 64K block) of the system. · 3 beeps: Base 64K RAM failure. A memory failure occurred within the first 64K of memory. · 4 beeps: System Timer failure. Timer #1 on the system board has failed to function properly. · 5 beeps: Processor failure. The CPU on the system board has generated an error. · 6 beeps: Keyboard Controller 8042 - Gate A20 error. The keyboard controller (8042) contains the gate A20 switch which allows the computer to operate in virtual mode. This error message means that the BIOS is not able to switch the CPU into protected mode. · 7 beeps: Virtual Mode (processor) Exception error. The CPU on the motherboard has generated an exception interrupt. · 8 beeps: Display Memory Read/Write Test failure. The system video adapter is either missing or its memory is faulty. This is not a fatal error. · 9 beeps: ROM-BIOS Checksum failure. The ROM checksum value does not match the value encoded in the BIOS. This is a good indication that the BIOS ROMs went bad. · 10 beeps: CMOS Shutdown Register Read/Write Error. The shutdown register for the CMOS memory has failed. · 11 beeps: Cache Error / External Cache Bad. The external cache is faulty. Other AMI BIOS POST Codes · 2 short beeps: POST failed. This is caused by a failure of one of the hardware testing procedures. · 1 long & 2 short beeps: Video failure. This is caused by one of two possible hardware faults: 1) Video BIOS ROM failure, checksum error encountered. 2) The video adapter installed has a horizontal retrace failure. · 1 long & 3 short beeps: Video failure. 
This is caused by one of three possible hardware problems: 1) The video DAC has failed. 2) The monitor detection process has failed. 3) The video RAM has failed. · 1 long beep: POST successful. This indicates that all hardware tests were completed without encountering errors. If you have access to a POST card reader (Jameco, etc.) you can watch the system perform each test by the value that's displayed. If the system hangs, the last value displayed will give you a good idea of where and what went wrong, or what's bad on the system board. Of course, having a description of those codes would be helpful, and different BIOSes have different meanings for the codes. (Could someone point out FTP sites where we could have access to a complete list of error codes for different versions of AMI and Award BIOSes?) BIOS Error Messages This is a short list of the most frequent on-screen BIOS error messages. Your system may show them in a different manner. When you see any of these, you are in trouble - Doh! (Does someone have any additions or corrections?) · "8042 Gate - A20 Error": Gate A20 on the keyboard controller (8042) is not working. · "Address Line Short!": Error in the address decoding circuitry. · "Cache Memory Bad, Do Not Enable Cache!": Cache memory is defective. · "CH-2 Timer Error": There is an error in timer 2. Several systems have two timers. · "CMOS Battery State Low": The battery power is getting low. It would be a good idea to replace the battery. · "CMOS Checksum Failure": After CMOS RAM values are saved, a checksum value is generated for error checking. The previous value is different from the current value. · "CMOS System Options Not Set": The values stored in CMOS RAM are either corrupt or nonexistent. · "CMOS Display Type Mismatch": The video type in CMOS RAM is not the one detected by the BIOS. · "CMOS Memory Size Mismatch": The physical amount of memory on the motherboard is different than the amount in CMOS RAM. 
· "CMOS Time and Date Not Set": Self-evident. · "Diskette Boot Failure": The boot disk in floppy drive A: is corrupted (virus?). Is an operating system present? · "Display Switch Not Proper": A video switch on the motherboard must be set to either color or monochrome. · "DMA Error": Error in the DMA (Direct Memory Access) controller. · "DMA #1 Error": Error in the first DMA channel. · "DMA #2 Error": Error in the second DMA channel. · "FDD Controller Failure": The BIOS cannot communicate with the floppy disk drive controller. · "HDD Controller Failure": The BIOS cannot communicate with the hard disk drive controller. · "INTR #1 Error": Interrupt channel 1 failed POST. · "INTR #2 Error": Interrupt channel 2 failed POST. · "Keyboard Error": There is a timing problem with the keyboard. · "KB/Interface Error": There is an error in the keyboard connector. · "Parity Error ????": Parity error in system memory at an unknown address. · "Memory Parity Error at xxxxx": Memory failed at the xxxxx address. · "I/O Card Parity Error at xxxxx": An expansion card failed at the xxxxx address. · "DMA Bus Time-out": A device has used the bus signal for longer than the allocated time (around 8 microseconds). If you encounter any POST error, there is a good chance that it is a hardware-related problem. You should at least verify that adaptor cards and other removable components (SIMMs, DRAMs, etc.) are properly inserted before calling for help. One common attribute of human nature is to rely on others before investigating the problem yourself. f:\12000 essays\technology & computers (295)\The Telephone System.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ The telephone is one of the most creative and prized inventions in the world. It has advanced from its humble beginnings to the wireless communication technology of today and the future. 
The inhabitants of the earth have long communicated over a distance, at first by shouting from one hilltop or tower to another. The word "telephone" originated from a combination of two Greek words: "tele", meaning far off, and "phone", meaning voice or sound, and became the known term for "far-speaking." A basic telephone usually contains a transmitter, which carries the caller's voice, and a receiver, which reproduces the sound of an incoming call. There are two common kinds of transmitters: the carbon transmitter and the electret transmitter. The carbon transmitter uses carbon granules between metal plates called electrodes, one of which is a thin diaphragm that moves under pressure from sound waves and transmits that pressure to the carbon granules. These electrodes conduct the electricity flowing through the carbon. The sound waves hitting the diaphragm cause the electrical resistance of the carbon to vary. The electret transmitter is composed of a thin disk of metal-coated plastic held above a thicker, hollow metal disk. This plastic disk is electrically charged, and creates an electric field. The sound waves from the caller's voice cause the plastic disk to vibrate, changing the distance between the disks, and thus changing the intensity of the electric field. These variations are translated into an electric current which travels across the telephone lines. The receiver of a telephone is composed of a flat ring of magnetic material. Underneath this magnetic ring is a coil of wire where the electric current flows. Here, the current and the magnetic field from the magnet cause a diaphragm between the two to vibrate, replicating the sounds that were transformed into electricity. The telephone is also composed of an alerter and a dial. The alerter, usually known as the ringer, alerts a person to a telephone call; it is triggered by a special frequency of electricity sent when the telephone number is dialed. 
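The carbon transmitter described above is, in effect, a sound-controlled variable resistor: as the diaphragm compresses the granules, their resistance falls, and by Ohm's law the line current rises. A minimal sketch of that relationship follows; the supply voltage, rest resistance, and sensitivity figure are made-up example values, not specifications of any real telephone.

```python
import math

# Illustrative model of a carbon transmitter: sound pressure on the
# diaphragm modulates the resistance of the carbon granules, which in
# turn modulates the current on the line (I = V / R). All numbers are
# invented example values.

SUPPLY_VOLTS = 6.0       # assumed battery voltage across the transmitter
REST_RESISTANCE = 50.0   # assumed resistance of the granules at rest, in ohms

def carbon_resistance(pressure):
    """Compression lowers resistance; rarefaction raises it.
    `pressure` is a unitless diaphragm displacement in [-1, 1]."""
    return REST_RESISTANCE * (1.0 - 0.3 * pressure)

def line_current(pressure):
    """Ohm's law: the varying resistance modulates the line current."""
    return SUPPLY_VOLTS / carbon_resistance(pressure)

# Sample a 1 kHz tone at 8 kHz for one cycle: the current swings above
# and below its rest value, tracing the sound wave onto the line.
samples = [line_current(math.sin(2 * math.pi * 1000 * t / 8000))
           for t in range(8)]
rest = line_current(0.0)  # 6 V / 50 ohms = 0.12 A with no sound
print(rest)
print(max(samples) > rest > min(samples))
```

The electret transmitter reaches the same end by a different route: the vibrating disk varies an electric field rather than a resistance, but either way the sound wave ends up encoded as a varying current.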
The dial is the region on the phone where numbers are pushed or dialed. There are two types of dialing systems: the rotary dial and the Touch-Tone. The rotary dial is a movable circular plate with the numbers one to nine, and zero; it sends pulses. The Touch-Tone system instead uses buttons that are pushed. The telephone was said to have been invented by many people. However, the first to achieve this success, although by accident, was Alexander Graham Bell. He and his associate were planning to conduct an experiment, when Mr. Bell spilt acid on himself in another room, and his associate clearly heard the first telephone message: "Mr. Watson, come here; I want you." Although Alexander Graham Bell had invented the telephone, his case had to be defended in court more than 600 times for this to be proven. After the invention of the telephone, many other great technological advances were made, which boosted the telephone into a worldwide affair. The first great advance was the invention of automatic switching. Next, long distance telephone calls were established in small steps: for example, from city to city, across a country, and across the ocean. Following this came undersea cables and satellites, which made it possible to link points halfway around the earth so that they sounded as if from next door. Finally, by adding three-digit area codes, all phone calls, whether to next door or around the world, could be placed directly by the caller. The first company to establish a telephone industry was the Bell Telephone Company, founded in 1877 by Alexander Graham Bell. This did last for some time; however, independent telephone companies were started in many cities and small towns. By 1908, many customers were being served by a new company called AT&T, which eventually bought out the Bell Company. Since it was costly to have the wires run to a household, many residential customers shared lines, an arrangement called a party line. 
Although these lines were cheaper for the customers, they were a nuisance, because only one person could use the phone at a time, and other households could listen in on the calls. Finally, the price of local calls was relatively low; long-distance calls, however, were priced relatively high when compared to the local telephone bill. Today, approximately 95% of the households across North America have telephones, which is creating a huge opportunity for companies that provide local and long-distance service. Although prices for calls are slowly decreasing, the competition between companies is increasing. This can be seen from advertisements on television and in the newspaper. And not only is this competition going to continue, it will increase as new technology is discovered. What is in store for the future? No one knows. However, some of the latest futuristic ideas that will soon be upon us are these: television screens may soon accompany the telephone, so that the caller can see who he or she is having a conversation with. Also, having all of the copper wire replaced with fiber optics will greatly increase the telephone's capabilities. This will give us the advantage of sending very large pieces of information over the phone line. The only thing that we do know about the telephone is that it sure has come a long way since its first discovery by the inventor Alexander Graham Bell, a man who will always be remembered. f:\12000 essays\technology & computers (295)\The Unkindest Cut Censorship Online.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ There is a section of the American populace that is slowly slithering into the spotlight after nearly two decades in the clandestine. Armed with their odd netspeak, mice, glowing monitors, and immediate access to a world of information, serious and amateur hackers alike have at last come out of the computer lab and into mainstream pop culture. 
Since I despise pleading ignorance about anything, I chose to read Mr. McDonald's article because of its detail concerning the future of the more amusing aspect of computing: the game. This article is relevant because, whether we like it or not, the PC (personal computer) is only going to grow in popularity and use, and the best weapon against the abuse of this new gee-whiz technology is to be educated about it. It is simply amazing how far gaming has come in the past decade. We have gone from stick figures on a blank screen to interactive movies. The PC is the newest way to play because it has the capability to process and display much more complex games than anything by Nintendo or Sega. Some problems with this, however, are the enormous cost of a decent system and software, and the technology that moves at lightning speed. The computer you buy tomorrow will not be able to handle any of the new software two years from now. Owners must not only keep up with the new trends but must also be well aware of what their own system can sustain, so that they do not overload it and cause it to crash. This article focuses on interactive video, which is a relatively new field in the gaming industry. The games that have been on the market have not lived up to the bombardment of advertising gamers have been subjected to. The video itself is often choppy and blurry, it rarely enhances the plot of the game, and it has yet to be truly interactive. This is because it is not part of a movie's nature to mingle with the audience. New software consumers should be aware of this before shelling out $60-$80 for an over-hyped game. This article offers the titles of the few good interactive games that have hit the shelves this year as well as a list of ones to avoid. It also describes several of the video cards (special flat chips that can be inserted into the back of your machine to help it process data) that you would have to purchase to play these games. 
It does a wonderful job of informing the readers about the games and hardware in terms that even a new gamer (a newbie) would be able to grasp. Often, computing magazines will use hacker lingo (netspeak) so frequently that the meaning and facts are lost. The article suggests that avoiding the whole genre for a few years, until the industry polishes its product, is the best move. From the experiences I have had with computer games of all kinds, I would have to agree. f:\12000 essays\technology & computers (295)\Truth and Lies about Computer Viruses.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Truth and Lies About the Computer Virus Walk into any computer store today and there will be at least twenty or thirty computer virus programs. From the looks of it, computer viruses have gotten out of hand, and so has the business of stopping them. The computer user must cut through the media hype of apocalyptic viruses and shareware programs and discover the real facts. Before we even start the journey of exploring the computer virus, we must first eliminate all the "fluff." The computer user needs to understand how information about viruses reaches the public. Someone creates the virus and then infects at least one computer. The virus crashes or ruins the infected computer. An anti-virus company obtains a copy of the virus and studies it. The anti-virus company makes an "unbiased" decision about the virus and then discloses its findings to the public. The problem with the current system is that there are no checks and balances. If the anti-virus company wants to make viruses seem worse, all it has to do is distort the truth. There is no organization that certifies whether or not a virus is real. Even more potentially harmful is that the anti-virus companies could write viruses in order to sell their programs. Software companies have distorted, and do distort, the truth about viruses. 
"Antivirus firms tend to count even the most insignificant variations of viruses for advertising purposes. When the Marijuana virus first appeared, for example, it contained the word "legalise," but a miscreant later modified it to read "legalize." Any program which detects the original virus can detect the version with one letter changed -- but antivirus companies often count them as "two" viruses. These obscure differentiations quickly add up." http://www.kumite.com/myths/myth005.htm Incidentally, the Marijuana virus is also called the "Stoned" virus, thereby making it yet another on the list of viruses that companies protect your computer against. I went to the McAfee Anti-virus Web site looking for information on the Marijuana virus but was unable to obtain that information. I was, however, able to get a copy of the top ten viruses from their site. One specific virus is called Junkie: "Junkie is a multi-partite, memory resident, encrypting virus. Junkie specifically targets .COM files, the DOS boot sector on floppy diskettes and the Master Boot Record (MBR). When initial infection is in the form of a file infecting virus, Junkie infects the MBR or floppy boot sector, disables VSafe (an anti-virus terminate-and-stay-resident program (TSR), which is included with MS-DOS 6.X) and loads itself at Side 0, Cylinder 0, Sectors 4 and 5. The virus does not become memory resident, or infect files at this time. Later, when the system is booted from the system hard disk, the Junkie virus becomes memory resident at the top of system memory below the 640K DOS boundary, moving interrupt 12's returns. Once memory resident, Junkie begins infecting .COM files as they are executed, and corrupts .COM files. The Junkie virus infects diskette boot sectors as they are accessed. The virus will write a copy of itself to the last track of the diskette, and then alter the boot sector to point to this code. 
On high density 5.25 inch diskettes, the viral code will be located on Cylinder 79, Side 1, Sectors 8 and 9." Junkie's description is that of a basic stealth/Trojan virus, of a kind which has been in existence for 10 years. They also listed Anti-exe as one of the top ten viruses but did not acknowledge the fact that it has three aliases. It's no wonder that the general public is confused about computer viruses! I decided to investigate the whole mis- or disinformation issue a little further. I went to the Data Fellows Web site to see what the distributors of F-prot had to say about viruses. It is no surprise that I found them trying to sell software with the typical scare tactics: "Quite recently, we read in the newspapers how the CIA and NSA (National Security Agency) managed to break into the EU Commission's systems and access confidential information about the GATT negotiations. The stolen information was then exploited in the negotiations. The EU Commission denies the allegation, but that is a common practice in matters involving information security breaches. At the beginning of June, the news in Great Britain told the public about an incident where British and American banks had paid 400 million pounds in ransom to keep the criminals who had broken into their systems from publicizing the systems' weaknesses [London Times, 3.6.1996]. The sums involved are simply enormous, especially since all these millions of pounds bought nothing more than silence. According to the London Times, the banks' representatives said that the money had been paid because "publicity about such attacks could damage consumer confidence in the security of their systems". Criminal hackers are probably encouraged by the fact that, in most cases, their victims are not at all eager to report the incidents to the police. And that is not all; assuming that the information reported by the London Times is correct, they may even get paid a "fee" for breaking in... 
a computer is broken into on the Internet every 20 seconds... Whatever the truth about these incidents may be, the fact remains that current information systems are quite vulnerable to penetration from outside. As the Internet becomes more popular and spreads ever wider, criminals can break into an increasing number of systems easily and without a real risk of being caught." Then the next paragraph stated: "Even at their initial stages, Data Fellows Ltd's F-Secure products meet many of these demands. It is the goal of our continuing product development to eventually address all such information security needs." In other words, nothing is safe unless you buy their products. Now that we have cleared the smoke on viruses, we know that there are only roughly 500 basic viruses. These viruses are tweaked, renamed, and re-cycled. So, what is a virus? First of all, we must be aware that there is no universally accepted naming practice or discovery method for viruses. Therefore all virus information is subjective, and subject to interpretation and constant dispute. To define a virus we must ask an expert. According to Fred Cohen, a computer virus is a computer program that can infect other computer programs by modifying them in such a way as to include a (possibly evolved) copy of itself. This does not mean that a virus has to cause damage, because a virus may be written to gather data and obtain hidden files in your system. Now that you are aware of the hoaxes and misinformation about viruses, you will be better equipped to deal with viral information. The next time you hear of a killer virus, just remember what you have learned. You know that all viruses have the same roots. f:\12000 essays\technology & computers (295)\V chip.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ What is a V-chip? 
This term has become a buzzword for any discussion involving telecommunications regulation and television ratings, but not too many reports define the new technology in its fullest form. A basic definition of the V-chip is: a microprocessor that can decipher information sent in the vertical blanking interval of the NTSC signal, for the purpose of controlling violent or controversial subject matter. Yet the span of the new chip is much greater than any working definition can encompass. A discussion of the V-chip must include a consideration of the technical and ethical issues, in addition to examining the constitutionality of any law that might concern standards set by the US government. Yet in the space provided for this essay, the focus will be the technical aspects and costs of the new chip. It is impossible to assume in general that the V-chip will solve the violence problem of broadcast television, or that adding this little device to every set will be a First Amendment infringement. We can, however, find clues through examining the cold facts of broadcast television and the impact of a mandatory regulation on that free broadcast. One definition of the V-chip comes from Al Marquis of Zilog Technology: "Utilizing the EIA's Recommended Practice for Line 21 Data Service (EIA-608) specification, these chips decode EDS (Extended Data Services) program ratings, compare these ratings to viewer standards, and can be programmed to take a variety of actions, including complete blanking of programs." Neither the FCC nor Capitol Hill has set any standards for V-chip technology; this has allowed many different companies to construct chips that are similar yet not identical, and possibly not compatible. Each chip has advantages and disadvantages for the ratings system soon to be developed. For example, some units use onscreen programming, as VCRs and the Zilog product do, while others are considering set-top options. 
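The decode-compare-act behavior Marquis describes can be sketched in a few lines. This is a hypothetical illustration only: the integer rating scale, names, and threshold logic below are invented for the sketch and are not taken from the EIA-608 specification.

```python
# Hypothetical sketch of the decode-compare-blank logic a V-chip performs.
# The integer rating scale and names are invented for illustration;
# EIA-608 EDS encodes ratings differently.

def vchip_action(program_rating: int, household_limit: int) -> str:
    """Compare a decoded program rating against the viewer-set limit."""
    if program_rating > household_limit:
        return "blank"   # suppress the program entirely
    return "pass"        # allow normal display

# A household that permits up to rating 2 blanks a rating-4 program:
assert vchip_action(4, 2) == "blank"
assert vchip_action(1, 2) == "pass"
```

The comparison itself is trivial; the engineering burden the essay goes on to describe lies in decoding the line-21 data and in the incompatible control interfaces different manufacturers build around this step.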
Also, different companies are using different methods of parental control over the chip. Another problem that these new devices may incur when included in every television is space. The NTSC signal includes extra information space known as the subcarrier and the vertical blanking interval. As explained in the quotation from Mr. Marquis, the V-chips will use a certain section of this space to send simple rating numbers and points that will be compared to the personality settings in the chip. Many new technologies are being developed for smart TV or data broadcast on this part of the NTSC signal. Basically, the V-chip will severely limit the bandwidth for high performance transmission of data on the NTSC signal. There will also be a cost to this new technology, which will be passed to consumers. Estimates are that each chip will cost six dollars wholesale and must be designed into the television's logic. The V-chip could easily push the price of televisions up by twenty-five or more dollars during the first years of production. The much simpler solution of set-top boxes allows control for those who need it and allows those consumers who don't to save money and use new data technology. Another cost will most definitely be levied on television advertisers for the upgrade of transmitting equipment. Whether the V-chip encoding signal is added upstream of the transmitter or directly into uplink units and other equipment intended for broadcast, this cost will have to be compensated for in advertising sales and prices. The V-chip regulation may also require another staff employee at most stations to effectively rate locally aired programs and events. All three of these questions have been addressed in minute detail. Most debate has focused upon the new rating system and its implementation. Though equally important, this doesn't deal with the ground-floor concerns of the television producing and broadcasting industries. 
Now, as members of the industry, we must hold our breath until either the fed knocks the wind from free broadcast with mandatory ratings devices, or allows the natural regulation to continue. 
f:\12000 essays\technology & computers (295)\video card.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Introduction 
People live in a three-dimensional space. They know what is up, down, left, right, close and far. They know when something is getting closer or moving away. However, traditional personal computers could only make use of two-dimensional space, due to the relatively low technology level of video cards in the past. As new technology has been introduced to the video card industry in recent years, the video card can now render 3D graphics. Most PC games nowadays are in three dimensions. In addition, some web sites also make use of three-dimensional space. This means that they are no longer flat homepages, but instead virtual worlds. With that added dimension, they all look more realistic and attractive. Nevertheless, 3D does not yet exist in most business programs today, but it can be forecast that it is not far away. Many new kinds of video cards have been introduced to the market recently. In the past, the video card could only deliver two-dimensional graphics in low resolution; now, however, high resolution three-dimensional graphics technology has emerged. This paper will discuss why the video card nowadays can process high resolution three-dimensional graphics, while the video card in the past could only process low resolution two-dimensional graphics. The explanation will be based on some recently developed video cards such as the Matrox Millennium. This paper will also discuss how 3D graphics display on a 2D monitor. Lastly, the video card itself, the Matrox Millennium, will also be discussed. 
Basic principles 
In order to understand the recent development of the video card, let's take a look at how a video card works. The video card is a circuit which is responsible for processing the special video data from the central processing unit (CPU) into a format that the visual display unit (VDU), or monitor, can understand, to form a picture on the screen. The Video Chipset, the Video Memory (Video RAM) and the Digital-to-Analog Converter (RAM DAC) are the major parts of a video card. After the special video data leaves the CPU, it has to pass through four major steps inside the video card before it finally reaches the VDU. First, the special video data transfers from the CPU to the Video Chipset, which is the part responsible for processing the special video data, through the bus. Secondly, the data transfers from the Video Chipset to the Video Memory, which stores the image displayed on a bitmap display. Then, the data transfers to the RAM DAC, which is responsible for reading the image and converting it from digital data to analog data. It should be noted that every data transfer inside the computer system is digital. Lastly, the analog data transfers from the RAM DAC to the VDU through a cable connected between them outside the computer system. The performance of a video card is mainly dependent upon its speed, the amount and quality of the Video Memory, the Video Chipset and the RAM DAC. The faster the speed, the higher the picture quality and resolution the video card can deliver. This is due to the fact that the picture on the VDU has to change continuously, and this change must be made as fast as possible in order to display a high quality and realistic image. In the process of transferring data from the CPU to the Video Chipset, the speed is mainly dependent upon the type and speed of the bus, the mainboard and its chipset. The amount of the Video Memory is also responsible for the color and screen resolution. 
The higher the amount of the Video Memory, the higher the color depth the video card can render. On the other hand, the type of the Video RAM is another factor that affects the speed of the video card. The Video Chipset is the brain of a video card. It is similar to the CPU on the motherboard. However, unlike the CPU, which can be fitted to different motherboards, certain Video Chipsets can only be fitted to certain video cards. The Video Chipset is responsible for processing the special video data received from the CPU. Thus, it determines all the performance aspects of the video card. The RAM DAC is the part responsible for the refresh rates of the monitor. The quality of the RAM DAC and its maximum pixel frequency, which is measured in MHz, are the factors affecting the refresh rates. In fact, a 220 MHz RAM DAC is not necessarily, but most likely, better than a 135 MHz one. 
Recent developments 
Traditionally, the personal computer could only deliver two-dimensional pictures. However, as people's expectations have risen, they want the picture on their personal computer to be more realistic and attractive. Thus, the display of three-dimensional pictures on the personal computer has been developed. The rendering of a 3D image requires the computer to update the screen of the VDU at least 15 times per second as one navigates through it, and each of the objects has to go through a transformation in depth space, which is known as the z-axis, in addition to the coordinates of the x-y plane. Nevertheless, the video card in the past was not "powerful" enough to render three-dimensional graphics. The introduction of some new kinds of video cards in recent years has solved this problem, and they are able to render 3D graphics now. In the past, the video card could only deliver two-dimensional graphics because the technology at that time limited what it could do. 
One of the problems is that the speed of the transfer of data from the CPU to the Video Chipset was relatively low, but this is actually not a problem associated with the video card itself. It is associated with the type of the CPU, the bus and the motherboard in the computer system. On the other hand, the biggest problem is actually the quality of the Video RAM. The Video RAM is the part of a video card which is situated between two very busy devices, the Video Chipset and the RAM DAC, and the Video RAM has to serve both of them all the time. Whenever the screen has to change, the Video Chipset has to change the content of the Video Memory. On the other hand, the RAM DAC has to read the data from the Video Memory continuously. This means that when the Video Memory is receiving data from the Video Chipset, the RAM DAC has to wait aside. Whenever the video card has to render three-dimensional graphics, the screen has to change at least 15 times per second, which means that more data has to be transferred from the Video Chipset to the Video Memory, and the data has to be read faster by the RAM DAC. However, the video card, or more precisely the Video Memory, at that time did not have the technology to achieve this kind of process. Thus, the video card in the past was not able to deliver three-dimensional graphics. In recent years, video card manufacturers have developed technology to solve the problem of poor Video Memory. They have found three different ways to deal with this problem: using a higher quality of Video Memory, increasing the video memory bus size, and increasing the clock speed of the video card. 
1 ) Dual ported Video RAM 
The major step is to make the Video RAM dual ported. This means that when data is transferred from the Video Chipset to the Video Memory via one port, the RAM DAC can read the data from the Video Memory through an independent second port. Thus, these two processes can occur at the same time. 
Both the Video Chipset and the RAM DAC need not wait for each other anymore. This kind of RAM is called VRAM. Of course, the technology applied is not just doubling the ports in the RAM; it is actually very complicated. Thus, VRAM is more expensive than the normal kind. The invention of the VRAM can offer a higher refresh rate and higher color depth of the graphics on the monitor. A high refresh rate means that the RAM DAC will send a complete picture to the monitor more frequently. Therefore, the RAM DAC has to read the data from the Video Memory more often. However, when a video card of the past, without VRAM, wanted to achieve this high refresh rate, it had to lower the video performance, as the Video Memory could not afford this kind of heavy workload. To maintain a high refresh rate and high video performance at the same time, VRAM has to be used, since this kind of RAM can serve the Video Chipset and the RAM DAC at the same time. Thus, the video card need not reduce the video performance when a higher refresh rate occurs. On the other hand, to achieve a high color depth, the Video Memory has to read more data from the Video Chipset at a time, and thus more data will be sent to the RAM DAC. This process will surely take a longer time. At an 8 bit color resolution (256 colors), a 1024 x 768 screen needs 786432 bytes of data to be read by the RAM DAC from the Video Memory. For the same screen, a 24 bit color resolution (16777216 colors) needs 2359296 bytes of data to be read by the RAM DAC. For similar reasons, if a video card of the past wanted to achieve this kind of high color depth, it had to lower the refresh rate. This problem can also be solved by the use of VRAM. In short, the new video card with VRAM can provide a high refresh rate and high color depth at the same time. Thus, the rendering of three-dimensional graphics is now possible. The WRAM is used in the Matrox card instead of the VRAM. The WRAM was developed by the Matrox company. 
Like the VRAM, it is dual ported. However, the WRAM has a smarter design than the VRAM, so it is faster. Ironically, the WRAM is even cheaper than the VRAM. Lastly, there are many other types of Video RAM, such as DRAM (Dynamic RAM), EDO DRAM (Extended Data Out DRAM), SDRAM (Synchronous DRAM), SGRAM (Synchronous Graphics RAM), MDRAM (Multibank DRAM), and RDRAM (RAMBUS DRAM). Unlike the VRAM and WRAM, they are all single ported and so are slower. The DRAM is the slowest one amongst all of them. 
2 ) Increase video memory bus size 
Three years ago, the release of the 32 bit video card amazed people all over the world. However, the 64 bit video card is being introduced nowadays, which has a 64 bit video memory bus inside it. In addition, the 128 bit video card is also available. The video memory bus is a path which links the Video Chipset, the Video RAM and the RAM DAC together. With a 64 bit video memory bus, 8 bytes of data can be transferred in one clock cycle, compared to 4 bytes with a 32 bit video memory bus. Thus, the amount of data transferred is doubled with the use of a 64 bit video card. It is important to notice that a 1 MB Video RAM usually has only a 32 bit data bus. Thus, a 64 bit video card should always work with at least 2 MB of Video RAM; otherwise, the 64 bit video card will not be able to use its 64 bit data path. All in all, with the use of a 64 bit video card, more data can be transferred at one time. Thus, it can actually shorten the time to transfer data from the Video Chipset to the Video RAM, or from the Video RAM to the RAM DAC. This means that a higher color resolution graphic can be rendered. 
3 ) Increase the clock speed 
The third way is the most obvious one: simply increase the clock speed of the Video Chipset and the Video RAM. Of course, the technology to increase the clock speed is very complicated. 
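The interplay of color depth, bus width and clock speed can be checked with a little arithmetic. The screen sizes and byte counts below are the ones quoted in the text; the 100 MHz memory clock is an assumed round figure for illustration only.

```python
# Back-of-the-envelope video memory arithmetic. The figures for frame
# sizes come from the text; the 100 MHz memory clock is an assumed
# round number for illustration.

def frame_bytes(width, height, bits_per_pixel):
    """Bytes the RAM DAC must read from Video Memory per full screen."""
    return width * height * bits_per_pixel // 8

def bus_bandwidth(bus_bits, clock_hz):
    """Peak bytes per second across the video memory bus."""
    return bus_bits // 8 * clock_hz

# The figures quoted in the text:
assert frame_bytes(1024, 768, 8) == 786432      # 8 bit color (256 colors)
assert frame_bytes(1024, 768, 24) == 2359296    # 24 bit color (16.7M colors)

# A 64 bit bus moves twice as much per clock as a 32 bit bus:
assert bus_bandwidth(64, 100_000_000) == 2 * bus_bandwidth(32, 100_000_000)

# Refreshing a 24 bit 1024 x 768 screen 75 times a second needs
# about 177 MB/s, well inside a 64 bit bus at the assumed 100 MHz:
needed = frame_bytes(1024, 768, 24) * 75
assert needed < bus_bandwidth(64, 100_000_000)
```

This is why doubling the bus width or raising the clock directly translates into higher attainable color depths and refresh rates.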
The fastest Video Chipset so far is the ET 6000 chipset, which can run at 100 MHz, while the fastest video memory is SGRAM, which can run at clock speeds up to 125 MHz. The SGRAM is a special graphics version of SDRAM (synchronous DRAM). It is not just the job of the video card to achieve high resolution three-dimensional graphics. The video card has to work with a good computer system. Recall that the speed of the transfer of data from the CPU to the Video Chipset is mainly dependent upon the bus type, the mainboard and its chipset. Thus, a good computer system for good graphics should have a PCI bus which runs at 33 MHz, a Pentium processor with MMX technology, and a good mainboard with a chipset such as the Intel 430HX, which will affect the PCI performance. 
3D graphics on a 2D monitor 
Although the video card can render 3D graphics now, the monitor that the graphics display on is still a flat two-dimensional surface. Thus, the three-dimensional graphic has to be mapped to the 2D screen. This is done using perspective algorithms. This means that if an object is farther away, it will appear smaller; if it is closer, it will appear larger. To display 3D animations, an object is first represented as a set of vertices in a three-dimensional coordinate system, i.e. the x, y and z axes. The vertices of the object are then stored in the Video RAM. Afterwards, the object has to be rendered. Rendering is the process of calculating the color and position information which will make the user believe that there is a 3D graphic on a flat 2D screen. To make the calculation more efficient, the vertices of the object are segmented into triangles. Rendering also fills in all of the points on the surface of the object, which was previously saved only as a set of vertices. In this way, an object with a 3D effect is able to display on a flat 2D monitor. 
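The perspective mapping just described, where farther objects appear smaller, reduces to dividing a vertex's x and y by its depth z. A minimal sketch, with an assumed focal-length constant chosen only for illustration:

```python
# Minimal perspective projection: map a 3D vertex onto a 2D screen by
# dividing x and y by the depth z. The focal length d is an assumed
# constant for illustration; real renderers derive it from the field
# of view and screen size.

def project(x, y, z, d=256.0):
    """Return screen coordinates for a vertex at depth z (z > 0)."""
    return (x * d / z, y * d / z)

# The same vertex lands closer to the screen center (appears smaller)
# when it is farther away:
near = project(10, 10, z=100)
far = project(10, 10, z=400)
assert far[0] < near[0] and far[1] < near[1]
```

Applying this division to every triangle vertex is the per-frame workload that drove the memory and bus improvements discussed earlier.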
A new video card - Matrox Millennium 
Lastly, let's discuss some new features of a new video card - the Matrox Millennium. The Matrox Millennium is a 64-bit video card. It can work with 2 MB, 4 MB or even 8 MB of video RAM. The video RAM is Matrox's own WRAM. It also has a powerful 220 MHz RAMDAC. It is actually the fastest video card available in the market now. However, because of its extremely high speed, the graphics quality is relatively lower when compared to other video cards. The following is a summary of the new 3D features of the Matrox Millennium: 
Texture mapping: This applies bitmapped texture images, which are stored in memory, to objects on the screen so as to add realism. 
Bilinear and trilinear filtering: These smooth textures in a scene to lessen the blocky effect. With MIP (multum in parvo) mapping, an application provides different resolutions of an object as it moves closer or farther on the screen. 
Perspective correction: This rotates the texture bitmaps to give a better sense of convergence. Thus, when the video card renders a continuously moving object such as a meadow, it is able to maintain a realistic look as it recedes from the viewer. 
Anti-aliasing: This diminishes the "stair step" effect that arises because the computer generated image has a finite discrete resolution. 
Alpha blending: This allows one object to show through another to create a transparent look. 
Atmospheric effects: These usually make use of alpha blending. The effects include fog and lighting cues. 
Flat shading: This is a technique where a whole triangle is a single color. Thus, it can create a blocky effect. 
Gouraud shading: This is a more advanced method than flat shading. It improves the overall appearance of the graphics and allows curves to be more rounded. 
Z-buffering: This technique is one of the most important features for rendering 3D graphics. It controls how objects overlay one another in the third dimension. 
It is particularly important when filled polygons are included in the drawing. With Z buffering off, objects are drawn in the order in which they are transmitted to the display. With Z buffering on, objects are drawn from the back to the front. The Matrox Millennium can also play back a movie with the use of Moving Picture Experts Group (MPEG) compression. With this technology, the video card can compress the movie data into a special format. With the Chroma-key feature, the video card also supports "blue-screen" video effects, so that two unrelated displays can easily be pasted together. Moreover, if the video card has the Image scaling feature, it can map a video onto any window or screen size desired. 
References 
Magazine 
· PC Magazine - December 3, 1996, Vol. 15, No. 21 
Internet 
· http://www.dimension3d.com 
· http://wfn-shop.princeton.edu/cgi-bin/foldoc 
· http://www-sld.slac.stanford.edu/HELP/@DUCSIDA:IDAHELP/DSP/INTERACTIVE/ 
· http://www.ozemail.com.au/~slennox/hardware/video.htm#memory 
· http://www.imaginative.com/VResources/vr_artic/marcb_ar/3dcards/3dcards.html 
· http://www.atitech.com 
· http://www.matrox.com 
· http://www.diamondmm.com 
· http://www.tseng.com 
· http://www.s3.com 
f:\12000 essays\technology & computers (295)\Video On Demand.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ OVERVIEW OF VIDEO ON DEMAND SYSTEMS Joseph Newcomer SCOPE INTRODUCTION THE INITIATIVE FOR WORLDWIDE MULTIMEDIA TELECONFERENCING AND VIDEO SERVER STANDARDS NEW BUSINESS IMPERATIVES STARTING WITH STANDARDS TWO STANDARDS, ONE GOAL STANDARDS FIRST SUMMARY CONTENT PREPARATION: REQUIREMENTS: CODECs/Compression Object Oriented Database Management Systems Encoding Verification SUMMARY VIDEO SERVER REQUIREMENTS LIMITATIONS PRODUCTS DISTRIBUTION NETWORK: LAN TYPES PROTOCOLS WAN TYPES CLIENT INTERFACES RETRIEVAL INTERFACE VIEWER REQUIREMENTS PRODUCTS HARDWARE MINIMUMS SUMMARY DEFINITIONS A C D E F G H I J L M N O P S T BIBLIOGRAPHY: MULTIMEDIA: WEB Sites: Hard Copy References: 
WANS/GOV: WEB Sites: Hard Copy References: ODBMS: WEB Sites: Hard Copy References: MPEG: WEB Sites: Hard Copy References: LANS: WEB Sites: TOPICS FOR FUTURE MEETINGS: THE ATM ADAPTION LAYER ATM STANDARDS ISDN-B BROADBAND WAN IMPLEMENTATION VIDEO CONFERENCING ODBMS VIDEO ENCODING/DECODING STANDARDS 
SCOPE 
Video on demand has evolved as a major implementation problem for network integrators. Clients want the ability to retrieve and view stored video files asynchronously at near broadcast quality, on a local host. Some problems integrators face in achieving this goal include: video content preparation, server storage, network throughput, latency, client interfaces, quality of service, and cost. This paper addresses the design considerations for a private video on demand implementation. 
INTRODUCTION 
The Initiative for Worldwide Multimedia Teleconferencing and Video Server Standards 
The market for multipoint multimedia teleconferencing and video server equipment is poised for explosive growth. The technology for this necessary and much-anticipated business tool has been in development for years. By the turn of the century, teleconferences that include any combination of video, audio, data, and graphics will be standard business practice. Compliance with teleconferencing standards will create compatible solutions from competing manufacturers, feeding the market with a variety of products that work together as smoothly as standard telephone products do today. Specifically, with the adoption of International Telecommunications Union (ITU) recommendations T.120, H.320 and H.261, multimedia teleconferencing equipment manufacturers, developers, and service providers will have a basic established connectivity protocol upon which they can build products, applications, and services that will change the face of business communications. 
New Business Imperatives 
Video on Demand systems are starting to be required by commercial, industrial, governmental and military organizations to retrieve past information in order to prepare for and anticipate future events. This preparation and anticipation can be crucial to the survival of these organizations because of the key role of the individuals or groups being monitored. It is this monitoring and collection of data that allows these organizations to make informed decisions and to take the appropriate action in response to current events. Multipoint multimedia teleconferencing and video servers offer the required solution. As defined here, it involves a user-specified mix of traditional voice, motion video, and still-image information in the same session. The images can be documents, spreadsheets, simple hand-written drawings, highly-detailed color schematics, photographs or video clips. Participants can access the same image at the same time, including any changes or comments on that image that are entered by other participants. Video servers allow users to view stored video files of specific events, conferences, news clips and important information in near real time. The benefits are obvious. Instead of a text interpretation of a video clip, all interested parties can access the information. Little is left to verbal interpretation since all users have access to the original video. In the case of video clips, a person's actions, verbal tones, mannerisms and reactions to events around them can be viewed and interpreted. Increased productivity, reduced cost, and reduced travel time are the primary benefits, while proprietary technology and solutions are cited as the primary inhibitors of using video on demand products and services. 
Starting with Standards While multimedia teleconferencing and video servers promise to revolutionize vital everyday corporate tasks such as project management, training, and communication between geographically-dispersed teams, it is clear that standards-based solutions are a prerequisite for volume deployment. Standards ensure that end-users are not tied to any one supplier's proprietary technology. They also optimize capital investment in new technologies and prevent the creation of de facto communication islands, where products manufactured by different suppliers do not interoperate with each other or do not communicate over the same type of networks. When adopted and adhered to by equipment suppliers and service providers alike, standards represent the most effective and rational market-making mechanism available. ISDN, fax, X.25, and GSM are a few obvious examples of standards-based technologies. Without internationally-accepted standards and the corresponding ability to interoperate, the services based on these technologies would almost certainly languish as simple curiosities. Interoperability is particularly important in multipoint operation, where more than two sites communicate. A proprietary solution might suffice if two end users want to communicate only with each other; however, this limited type of communication is rare in today's business world. In typical business communications, multiple sites, multiple networks, and multiple users have communications equipment from multiple manufacturers, requiring the support of industry standards to be able to work together. This interoperability is also critically important when a video server may be transmitting data across a WAN to multiple users, in multiple sites. Perhaps the most important effect of standards is that they protect the end users' investments. 
A customer purchasing a standards-based system can rely on not only the current interoperability of his equipment but also the prospect of future upgrades. In the end, standards foster the growth of the market by encouraging consumer purchases. They also encourage multiple manufacturers and service providers to develop competing and complementary solutions and services. Two Standards, One Goal Fortunately, standards for multimedia teleconferencing are at hand. Working within the United Nations-sanctioned ITU's Telecommunications Standardization Sector, two goals have been achieved: the T.120 audiographics standards and the H.320 videotelephony standards. T.120, H.320 and H.261 are "umbrella" standards that encompass the major aspects of the multimedia communications standards set. The T.120 series governs the audiographic portion of the H.320 series and operates either within H.320 or by itself. Ratification of the core T.120 series of standards is complete. These recommendations specify how to use a set of infrastructure protocols to efficiently and reliably distribute files and graphical information in a multipoint multimedia meeting. The T.120 series consists of two major components. The first addresses interoperability at the application level, and includes T.126 and T.127. The second component includes three infrastructure components: T.122/T.125, T.124, and T.123. The H.320 standards were ratified in 1990, but work continues to encompass connectivity across LAN-WAN gateways. The existing H.320 umbrella covers several general types of standards that govern video, audio, control, and system components. With many businesses using LANs to connect their PCs, the pressure is on to add videoconferencing to those networks. Since the H.320 standards currently address interoperability of video conferencing equipment across digital WANs, it is a logical and necessary step to expand the standards to address LAN connectivity issues. 
As the work to expand H.320 continues, it remains the accepted standard. Both the T.120 and the H.320 series of standards will be improved upon and extended to cover networks and provide new functionality. This work will maintain interoperability with the existing standards. 
Standards First 
Standards as complex and universal as the H.320 and T.120 series need a coordination point for the interim steps a proposal takes on its way to becoming a standard. The IMTC is an international group of more than 60 industry-leading companies working to complement the efforts of the ITU-T, with an emphasis on assisting the industry to bring standards-based products successfully to the market. Its goals include promoting open standards, educating the end user and the industry on the value of standards compliance and applications of new technologies, and providing a forum for the discussion and development of new standards. The IMTC is approved as an ITU-T liaison, and interfaces with the ITU-T by participating in standards discussion and development, feeding information and findings into the appropriate ITU-T Study Groups. The Standards First initiative encourages multimedia equipment manufacturers to start with compliance to at least the H.320, T.120 and H.261 standards described above. Further standards compliance is recommended but optional, and manufacturers will still have the ability to differentiate their products with proprietary features, creating Standards Plus products. Compliance to the minimum H.320/T.120 standards will ensure a basic level of connectivity across equipment from all participating manufacturers. 
Summary 
Standards have played an important part in the establishment and growth of several consumer and telecommunications markets. 
By creating a basic commonality, they ensure compatibility among products from different manufacturers, thereby encouraging companies to produce varying solutions and end users to purchase products without fear of obsolescence or incompatibility. The work of both the IMTC and the ITU-T represents an orchestrated effort to promote a basic connectivity protocol that will encourage the growth of the multimedia telecommunications market. The Standards First initiative, which has been accepted by several industry-leading companies, requires a minimum of H.320, H.261 and T.120 compliance to establish that basic connectivity. Manufacturers are then able to build on the basic compliance by adding features to their products, creating Standards Plus equipment. By ensuring interoperability among equipment from competing manufacturers, developers, and service providers, Standards First ensures that a customer's initial investment is protected and future system upgrades are possible. 
Content Preparation: 
The first step in a VOD system is the entry of video information. The possible sources of video information in a large scale (Government) VOD system include: recorded and live video, scanned images, and EO, IR, and SAR collected images. Recorded video is the primary concern of this paper. Since latency and jitter do not affect imagery data types, they will be noted but not expanded upon. Live video is the primary concern of video conferencing, but its requirements do overlap with recorded (VOD) video. 
REQUIREMENTS: 
Recorded video must be digitized and compressed as soon as possible in the VOD architecture to minimize the system storage requirements. The Moving Picture Experts Group of the ISO developed the MPEG-1 and MPEG-2 standards for video compression. With MPEG-1, a 50 to 1 ratio is typical. MPEG-1 can encode images at up to 4K x 4K x 60 frames/sec. 
MPEG-2 was optimized for digital compression of TV and supports rates up to 16K x 16K x 30 frames/sec, but 1920 x 1080 x 30 frames/sec is considered broadcast quality (MPEG-2, Hewlett-Packard pub. 5963-7511E). MPEG-2 offers a more efficient means to code interlaced video signals such as those which originate from electronic cameras. (Chadd Frogg 8/95)

CODECs/Compression

CODECs encode and decode video into digital format. The CODEC must be configured to encode the information at the desired end resolution: if the end user requires broadcast-quality video, the CODEC must support that level of quality. The CODEC should also be compatible with the desired data throughput rate of the Content Preparation element. (This can, of course, be overcome with sufficient buffering.) Several CODECs output information in a form which is directly compatible with distribution HW; some are designed to output information as DS3, ATM OC3, or Fibre Channel. The Pacific Bell "Cinema of the Future" project utilizes an HDTV CODEC. The analog HDTV signal is digitized and compressed to a DS3 rate (44.7 Mbps) by Alcatel's 1741 CODEC. The CODEC imposes a Discrete Cosine Transform (DCT) hybrid compression algorithm with compensation for video motion. Though the precise algorithm performed by the 1741 is proprietary, the following is an overview of the process: pixel groups called blocks are translated into frequency information using the DCT (similar to a Fourier transform). Next, a quantization step drops off the least significant bits of information. These coefficients are then entropy-encoded into variable-bit-length codes. This digital information, now 1/50 of its original size, can be passed on to an output mechanism (HW or SW driver). This is of course just a quick overview; the process for encoding information has been fairly well documented by the ISO.
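The block-transform pipeline just described can be sketched in a few lines of Python. This is a toy illustration of the DCT-and-quantize idea, not the 1741's proprietary algorithm; the 8x8 block size, the ramp test pattern, and the quantizer step of 16 are illustrative assumptions.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 pixel block (the transform step
    described above; real CODECs use fast hardware implementations)."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = math.sqrt(0.5) if u == 0 else 1.0
            cv = math.sqrt(0.5) if v == 0 else 1.0
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = 0.25 * cu * cv * s
    return out

def quantize(coeffs, step=16):
    """Quantization: drop the least significant bits by integer division.
    Most high-frequency coefficients round to zero, which is what the
    entropy coder then squeezes into short variable-length codes."""
    return [[round(c / step) for c in row] for row in coeffs]

# A flat block with a gentle horizontal ramp: almost all of its energy
# lands in a handful of low-frequency coefficients.
block = [[100 + 2 * y for y in range(8)] for x in range(8)]
q = quantize(dct2(block))
nonzero = sum(1 for row in q for c in row if c != 0)
```

After quantization only a few of the 64 coefficients survive; the ratio of surviving coefficients to the original block is where the compression comes from.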
Object-Oriented Database Management Systems

In order to set up a searchable database of these MPEG objects, several companies are introducing Object-Oriented Database Management Systems (ODBMS). These systems can be coupled with either the Media Server element or the Content Preparation element of the VOD system. It would be ideal if all ODBMS spoke the same language so that information could be exchanged between databases. A common query language would be advantageous, but established standards such as SQL do not adequately address video objects. Illustra has added object-oriented extensions onto ANSI SQL. These extensions are then used to create "DataBlades," which provide image handling and manipulation capabilities. Since this architecture uses SQL, it is more likely that third-party front-end authoring software will be compatible with Illustra. (Interoperability, 10/95)

Encoding Verification

If the VOD server is seen as a central library of video files, with some users archiving files and other users retrieving them, the requirement for format standards is evident. There is then also a requirement to verify that these format standards are being met. This verification usually falls upon the Content Preparation element of a VOD system. The natural metaphor is that of a publisher ensuring that a book is legible and free of grammatical errors before releasing it to the public. (This paper would probably be caught by such a publisher.) Auditing compressed video information is not as straightforward. A particular video stream can flow through an MPEG-2 encoder without incident while a second stream will bog down the system (possibly inducing errors). Rapidly changing backgrounds, like sports coverage, can cause problems. The MPEG-2 standard is complex, and it requires more than just an astute systems engineer to ensure that the designers of the encoders have not interpreted the MPEG standard differently from the decoder designers.
Hewlett-Packard suggests that the industry needs to consider testability as a primary requirement of VOD systems. One way to resolve encoding concerns could be to create standardized tests that carefully verify the implementation of the MPEG standard. Bit error rate testers can test transport layers, and traditional data analysis tools can also be used to build new test tools for MPEG. It should be no surprise that testability is the last area of standardization for the VOD marketplace.

Summary

Preparing video information for VOD archiving has reached a point where developers are able to concentrate on accelerating the compression phase. The compression techniques are relatively well documented. The industry is now addressing how to implement them faster: HW vs. SW, digitizing cameras vs. DSP cards. Most experts agree that even though today's workstations have the processing power to perform MPEG compression, it is usually more efficient to perform as much processing in HW (like dedicated video cards) as possible. This is not always the case in multimedia applications where the end product (due to BW limitations) is not really broadcast quality. The quality of imagery the user expects is also a major consideration in selecting a content preparation element. If the user cannot take advantage of a high-resolution 2K x 2K image, or if the BW of the distribution network is limited, then a high-resolution MPEG-2 CODEC might not be justified. If the CODEC implements the "spatial scalability" capability of MPEG-2, then the encoder provides the video in a two-part format. This lets low-resolution decoders extract the video signal, while with additional processing in more capable decoders, a high-resolution picture can be provided.

Video Server Requirements

Once the content is uploaded to the video server in the content preparation phase and registered appropriately in the database, it becomes available for the end user.
In order for this data to be available and viewable by the end user, the server should have at least a RAID 5 SCSI controller, 4 GB hard drives at 7200 RPM, and a high-speed network interface. The server should support MPEG-2 compression at 4.0 Mbps to deliver approximately 28 hours of video on demand (or 96 hours with MPEG-1 compression) of 30-fps, 640-by-480-pixel video, which equates to a minimum of 50 GB of hard disk space. The server should employ RAM to buffer the data being received from the disk drives to ensure a smoother transfer of the video to the end user; a minimum of 256 MB is recommended. The server should be able to handle MPEG-2 and MPEG-1 in NTSC, PAL or SECAM video formats and be able to meet broadcast and cable requirements for on-air program applications and video caching.

Compression Method *   Mb per 30-second clip   Mb per 60-second clip   Total capacity, 52 GB HDD
MPEG-1 @ 1.2 Mbps      36                      72                      96.3 hours
MPEG-2 @ 4 Mbps        120                     240                     28.8 hours

* Assuming the standard compression ratio per method type. (Storage figures are in megabits.)

Limitations

There are several major limitations that must be addressed in order to understand why the above requirements are imposed.

1) Storage--There currently appears to be a storage limitation on video servers because of the retrieval and transmission time associated with video. Multiple servers will be needed to store and retrieve from large archives of video information. These servers should be distributed remotely to maximize local retrieval and viewing while minimizing WAN traffic.

2) Data stream--In order to view video information with a minimum of latency and without jitter, the data stream needs to be constant and uninterrupted (with the exception of some buffering as necessary). There are several forms of buffering: a) media stream storage on hard disk; b) caching at the transmit buffer; c) network transit latency and buffers, which may be viewed as another buffer;
d) the receive end may buffer a sufficient amount of the media stream to maintain a continuous stream for display and suitable synchronization with the transmit end.

3) Concurrent users--The video server should be limited to 100 concurrent users in order to ensure that each user is able to access the requested data as expeditiously as possible.

4) Network bandwidth size--The network bandwidth needs to be directly proportional to the number of simultaneous video streams. The bandwidth of the system is effectively limited by the bandwidth / transmission capabilities originating at the server.

5) Latency--Although hard to determine, latency should be no more than 2 seconds for a video file retrieved locally and no more than 10 seconds for a video file retrieved over the WAN from a remote site.

6) ODBMS

Products

Several products that are currently being marketed as video servers are:

1) The Network Connection, M2V Video Server: a) 120 simultaneous 1.2 Mbps MPEG-1 video streams; b) 112 GB RAID 5 storage; c) in excess of 200 hours MPEG-1, and 60 hours MPEG-2; d) supports JPEG, M-JPEG, DVI, AVS, AVI, Wavelet, Indeo and other video formats; e) supports Ethernet, Token Ring, FDDI and ATM.

2) Micropolis Corp., AV Server: a) 16 MPEG-2 video decoder boards with 4 channels per card, for 64 channels at 6 Mbps per channel; b) 252 GB RAID storage; c) in excess of 120 hours MPEG-2; d) supports only MPEG-2.

3) Sun Microsystems, Media Center 1000E Video Server: a) 63 GB RAID 4 storage; b) in excess of 32 hours MPEG-2, and 81 hours MPEG-1; c) supports MPEG-1 and MPEG-2; d) supports ATM and Fast Ethernet.

Distribution Network:

Video on Demand (VOD) requires predictability and continuity of traffic flow to ensure real-time flow of information. MPEG-1 and MPEG-2 (as described above) require an effective BW of 1.5 - 4 Mbits/sec. Multiplying this "media stream" BW requirement by the number of clients gives a rough estimate of the required effective bandwidth of the distribution network.
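The storage figures above and the rough distribution-network estimate can both be checked with a few lines of Python. A minimal sketch: the 52 GB array, the per-stream rates, and the 100-concurrent-user limit come from the text; treating disk capacity as decimal gigabytes is an assumption.

```python
def clip_size_mbits(rate_mbps, seconds):
    """Storage for one clip, in megabits (the units used in the table)."""
    return rate_mbps * seconds

def capacity_hours(disk_gb, rate_mbps):
    """Hours of video a disk holds at a constant encoding rate."""
    bits = disk_gb * 8e9  # decimal gigabytes assumed
    return bits / (rate_mbps * 1e6) / 3600

# Reproduce the table rows for a 52 GB array.
assert clip_size_mbits(1.2, 30) == 36    # MPEG-1, 30-second clip
assert clip_size_mbits(4.0, 30) == 120   # MPEG-2, 30-second clip
print(round(capacity_hours(52, 1.2), 1))  # 96.3 hours of MPEG-1
print(round(capacity_hours(52, 4.0), 1))  # 28.9 hours of MPEG-2

# Rough effective distribution-network bandwidth: per-stream rate
# multiplied by the number of concurrent clients (100, the limit above).
clients = 100
print(clients * 4.0, "Mbps aggregate for MPEG-2")  # 400.0 Mbps
```

The small difference from the table's 28.8-hour figure is rounding; the arithmetic otherwise matches the quoted capacities.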
The Common Imagery Ground/Surface System (CIGSS) Handbook suggests the following steps to size and specify the LAN technology used for image dissemination systems:

1. Approximate the system usage profile by estimating the amounts of image, video and text handling that will be required.

2. Convert the amount of images, video and text to be processed into average effective data rates. Raw data transferred directly to an archive (our video server) and near-real-time processed imagery should be estimated separately; the bandwidth requirements can be combined later if needed.

3. Adjust the calculated rate for growth. The growth factor should be at least 50%.

4. Add a fraction (about 0.3 to 0.4) of the peak capacity to the growth-adjusted rate for interprocessor communications.

Updating heritage networks to this new BW requirement can incur substantial costs. The cost of implementing a high-speed network varies depending on the network architecture.

LAN Types

Several LAN architectures are being used in trial VOD systems. ATM, FDDI, token ring and even variations of the Ethernet standard can provide the required 10-100 Mb/sec BW. A version of Ethernet called switched Ethernet can provide up to 10 Mbps to all clients; since this is a switched architecture, the full 10 Mbps can be available to each client. This architecture provides the quickest, most cost-effective method of upgrading legacy systems, since it does not require replacement of existing 10BaseT wiring. 100VG-AnyLAN, an Ethernet variant that runs over voice-grade cabling, can also be implemented in a VOD system. This architecture, however, will require some cable upgrades from CAT 3 to CAT 5. 100VG Ethernet is expected to top out at 100 Mbps; no further upgrades are foreseen. Token ring networks have been implemented in a few VOD trial systems. FDDI can be set up to provide 100 Mbps, and because of the token ring architecture, the network can allocate BW for each client.
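The CIGSS sizing steps above reduce to simple arithmetic, sketched here in Python. The 40 Mbps average load and 100 Mbps peak are hypothetical inputs, not figures from the handbook.

```python
def size_lan_mbps(base_rate_mbps, peak_rate_mbps,
                  growth=0.5, interproc_fraction=0.35):
    """LAN sizing per the CIGSS handbook steps quoted above:
    1-2) start from the average effective data rate,
    3)   adjust for growth (at least 50%),
    4)   add ~0.3-0.4 of peak capacity for interprocessor traffic."""
    adjusted = base_rate_mbps * (1 + growth)
    return adjusted + interproc_fraction * peak_rate_mbps

# Hypothetical profile: 40 Mbps average load, 100 Mbps peak.
required = size_lan_mbps(40, 100)
print(required, "Mbps")  # 95.0 Mbps -- a 100 Mbps LAN technology fits
```

Running the same inputs with the low end of the interprocessor fraction (0.3) gives 90 Mbps, so either way the example system lands in FDDI/100VG territory rather than switched 10 Mbps Ethernet.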
A simulated system, described in the Sept. '95 edition of Multimedia Systems, would be capable of handling 60 simultaneous MPEG-1 video streams. The video server (a 486DX), not the 100-Mbit/sec token ring, limited the system size. This is of course a small system, and due to the "shared" nature of a token ring/FDDI architecture, it should not be implemented for larger (1000+) systems. ATM provides the highest BW and probably the most expensive network solution. ATM provides the proper class of service for video-on-demand applications. ATM connections running at OC3 rates (155 Mbps) are currently priced at approx. $300-$500. ATM is not a "shared" topology: BW is not dependent on the number of users. In fact, as the number of users on an ATM net is increased, the aggregate effective BW of the ATM network increases. ATM can have hundreds of services operating simultaneously: voice, video, LAN and ISDN. These services can all be guaranteed, and assured that they won't interfere with each other. The LAN marketplace is currently providing 155 Mbps products. Some of the ATM Forum leaders (such as FORE Systems) are also providing 622 Mbps (OC12) network interface cards (NICs). The problem is that ATM is a relatively new protocol. Several companies have come together to form the ATM Forum to help standardize the architecture. For most network application software, the cell-based ATM layer is not an appropriate interface. The ATM adaptation layer (AAL) was designed to bridge the gap between the ATM layer and the application requirements. The Forum's efforts have been very successful at the lower ATM adaptation layers, but some interoperability issues still exist. The American ATM Forum has standardized on AAL5 to map MPEG-2 for transport, while the European ETSI has chosen AAL2. These inconsistencies affect the transport of multimedia only through ATM LANs.
Protocols

There are several transport protocols that can be implemented for audio-video applications: TCP, UDP, SONET, the TCP/IP Resource Reservation Protocol (RSVP) and IPX/SPX. Due to the effective data rate necessary to support VOD, protocols that minimize client/server interaction are preferable, except in cases where an over-abundance of network bandwidth exists. In ATM nets supporting mostly non-VOD applications, retransmission of lost or corrupt packets will not be possible. For example, if cells are lost, the FORE Systems AVA real-time display SW uses pixel tiles from a previous frame. In a typical VOD system without error correction, QOS is directly proportional to network/LAN BER (bit error rate). VOD systems which provide error correction as part of the network protocol have to be designed to allow for the latency created by their error-correcting protocols. (DSS currently implements interleaving, Reed-Solomon and Viterbi decoding.) QOS trade-offs can be quantified and analyzed (see "QOS control in GRAMS for ATM LAN", IEEE Journal of Selected Areas in Communications, by Joseph Hui). Networking, DBMS and server companies have been adapting upper-layer protocols to VOD processes. Oracle Media Net utilizes a "sliding window" protocol. The sliding window protocol is a well-established methodology for ensuring transmission over lossy data links. Media Net monitors the response between client and server, lengthens the response-checking time to the point of error and then backs off. (This process theoretically diminishes disruptive latencies.) Novell developed the Novell Embedded Systems Technology (NEST) and NetWare to run over IPX/SPX protocols. The Novell implementation provides prioritization for video users; flow control from the client to the server does not yet exist. (Interoperability, 10/95)
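Oracle's exact protocol is proprietary, but the underlying sliding-window idea it builds on can be sketched as a toy model. The window size and the ack callback here are illustrative assumptions, not Media Net parameters.

```python
from collections import deque

def sliding_window_send(frames, window=4, acked_ok=lambda seq: True):
    """Toy sliding-window sender: keep at most `window` unacknowledged
    frames in flight; a frame whose acknowledgment was lost is
    retransmitted when it reaches the head of the window. (A generic
    illustration of the technique, not Oracle Media Net's adaptive
    variant.)"""
    in_flight = deque()
    sent_log = []
    for seq in frames:
        # Conceptually, block until there is room in the window.
        while len(in_flight) >= window:
            oldest = in_flight.popleft()
            if not acked_ok(oldest):       # lost ack -> retransmit
                sent_log.append(oldest)
                in_flight.append(oldest)
        sent_log.append(seq)
        in_flight.append(seq)
    return sent_log

# With no loss, each frame is transmitted exactly once.
print(sliding_window_send(range(8)))
```

The appeal for VOD is that the sender keeps streaming up to the window limit instead of stopping for an acknowledgment after every frame, which is what keeps the media stream continuous over a lossy link.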
WAN Types

Distributing VOD information outside the LAN requires either a very high bandwidth WAN with guaranteed availability, or substantial buffering and latency allowances at the client in order to ensure and maintain a constant display of data. When many people think of information distribution over a WAN, sourced by many different servers to many isolated users, the Internet naturally comes to mind. The Internet was used by the National Information Infrastructure (NII) workshop as a model for the delivery of video services. This commercial organization conference, in addition to supporting HDTV and DSS, is interested in providing VOD services to "all Americans". The Internet was seen as a good first attempt for distributing information: it is inexpensive, requires no gatekeepers, provides search utilities and has several proven human-machine interfaces (HMIs). Unfortunately, the Internet is also bandwidth limited, provides insufficient traffic control, security and directories, and has no guaranteed-delivery functions. The Internet may not be the solution to the VOD distribution problem, but it will expedite the development of an open-architecture commercial VOD WAN. Commercial enterprises have been considering hybrid fiber/coaxial cable as one possible solution. This implementation, also referred to as "fiber to the curb", requires a partial upgrade to existing telephone distribution infrastructures. Signals are transmitted over fiber to a neighborhood distribution (gateway) point. The signals are then either converted to RF and sent to the user (home) via coax, or converted to a lower-data-rate network interface and sent on to the home. The RF implementation requires a "set-top box" for decoding the RF; the latter could be a PC implementation. ISDN-B, the broadband version of ISDN, will probably evolve as the leading WAN technology.
Narrowband ISDN is already an accepted method of providing the higher serial data rates necessary for minimal-quality multimedia applications, like teleconferencing. True motion-picture-quality VOD implementations will require the Mbps data rates that should be provided by ISDN-B. The DOD has also been interested in the distribution of video and imagery across WANs. The Defense Airborne Reconnaissance Office (DARO) has developed the Common Imagery

f:\12000 essays\technology & computers (295)\videogame.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

VIDEOGAME: The High Tech Threat to Our Younger Generation

Anyone who has ever walked through a shopping mall on a weekend knows how popular videogame arcades have become with our young people. They have become a force in the lives of millions of kids all across America. Parents and teachers grow more concerned and worried when they see their kids devoted to videogames. Their concern is warranted, because videogames greatly influence the mental and learning processes of the younger generation. Many parents believe that their children learn values more from the mass media than from their homes. Generally speaking, the video and computer game industry has been a growing concern to religious groups, responsible politicians and bewildered parents for the disturbing contents and the substandard themes in some of its games. The videogame technology must be recognised for its role and influence on the younger generation because, for better or worse, it clearly affects their academic and social life. Indeed, the statistics on the videogame industry are alarming. It is a multi-million-dollar business that grew at 40 per cent a year from 1987 to 1993 (Palmeri 102). Tetzeli, in his article "Videogames: Serious Fun", compares videogames' $6.5-billion-a-year business to the Hollywood film industry (110).
He continues to point out that two Japan-based conglomerates have put a total of about 64 million videogame machines in US households. In addition, they also produced and licensed all the software for their machines (110). Palmeri estimates that producing and marketing a full-featured videogame can cost up to $10 million (102). Because of the cost, producers attempt to make a return on their investments and earn as much profit as they can. To achieve their goals, they feature more blood, gore and human dismemberment in their games to appeal to the younger generation, because violence sells. According to Palmeri, the game Mortal Kombat has sold a record 5 million copies at about $65 apiece (102). The advanced technology in upcoming videogame machines even allows the players to interact with screen images in ways never before possible. Analysts in this field say that it is only a prelude to the emerging worldwide network popularly known as the electronic information highway( ). Two of Japan's formidable corporate giants, Sega of America Inc. and Nintendo of America Inc., are the real force behind the growing phenomenon. "The worldwide home-videogames market which they dominate is worth around $20 billion, of which about two-thirds represents the games themselves and one-third the machines they are played on.... Their empires are based on a manufacturing and distribution system built around cartridges and dedicated machines" (Massacre 71). Their battle for market share in the massive multi-billion-dollar worldwide market and their expensive advertising battles have attracted public attention (Hulme 20). For instance, a ten-million-dollar marketing budget and the ensuing publicity fueled a national debate on videogame violence, which obviously helped Mortal Kombat finish the year as the top-selling videogame of the year, another success story; the game was expected to bring in $150 million in revenue by the end of 1994 (20).
Who else is to blame, other than the technology itself, for the public outcry against the violence and sex in video games? Some computer experts say that with a modem, most videogames could be made accessible, just like making airline reservations or holding library books. It is that easy. Evans, a concerned mother of two boys, complains, "You see, those mothers know when their kids go to the mall or some place like that, they won't be able to buy cigarettes, or alcohol or pornographic magazines." Evans continues, "But kids can walk into any movie rental store and pick up one of these violent video games, nobody will say no. The parents feel like they lost control with these games" (Browning 691). Alain Jehlen says, "Our children are spending countless hours with these machines, their eyes glued to the screen, their fingers madly pressing buttons to meet contrived on-screen challenges" (74). Nowadays, the technology has become even more advanced and is able to produce CD-quality stereo sound and three-dimensional images without any jerky movements. The result looks more like a short action film than the stick figures of the computer programs of the mid-eighties (Source). For now, the overall sales of 16-bit hardware systems have slowed dramatically as videogame players wait for the next generation of 32-bit and 64-bit systems due next year from Nintendo, Sega and Sony (Fitzgerald **). The main issue, however, is violence. Richard Brandt, in his essay titled "VIDEOGAMES: Is All That Gore Really Child's Play?", paints some graphic pictures of the hard-core violence( ). While Nintendo, the trend and price setter of the industry, acknowledges that two-thirds of its consumers are under 15, it released a game called Street Fighter II, which features a character gnawing on an opponent's head, with all the usual violent formula. In another massive hit, Mortal Kombat, each time a fighter lands a good punch or kick, the victim emits flying, bright-red animated blood.
Players can win a fatality bonus. The winner might knock out the opponent and rip out a still-pulsating heart by hand, or tear off the opponent's head and hold it up victoriously, the spinal cord dangling from its neck (Brandt 38). Brandt also observed a 19-year-old playing Mortal Kombat in a San Francisco arcade, who enjoyed the fact that "it doesn't look fake. It's a lot more real with all the blood and stuff"( ). What a taste! What is the purpose of videogame producers other than making money by exploiting animal desire and pure violence in the minds of our younger generation? (38). In fact, there appears to be a growing demand for more violent sports games as well. Bing Gorden, a senior vice president of Electronic Arts Inc., revealed as much when he recently tested a new hockey game with 25-year-olds. They demanded, "Where is the blood?" (38). Tetzeli points out that up until now videogames have been mostly a boys' game; so far, girls have not been a factor. Most of the titles targeted only male audiences and were based on boys' themes such as street fighting, car racing, and football. But now Sega of America's CEO, Kalinske, has turned his attention to the women's segment and set apart a team of well-known female marketers and game makers to produce games suited to feminine taste. In that way they have strategically targeted entering our living rooms the way television did (116). "It is a cultural disaster," lamented Mr. Jehlen, a successful producer of NOVA, a popular science documentary, and an accomplished writer, "but it does not have to be a negative force. While most of the games on today's market show shooting and kicking and not much thought, the videogame format has tremendous potential" (74). He believes that a video game can be a mental exercise machine (Jehlen 74). Since the videogame industry is a relatively fast-emerging industry, scientists have performed relatively little research in this area. However, while some condemn it outright, others endorse it conditionally.
Unfortunately, definite answers are not yet available, and there are few complete articles on this subject, as it is a fairly new field of research. Researchers claim that video games can help develop necessary problem-solving abilities, pattern recognition, resource management, logistics, mapping, memory, quick thinking, and reasoned judgements( ). Learning when to fight and when to run actually helps in real-life situations (207). Brody honestly believes that videogames can give children a sense of mastery. For them, success becomes like an addiction, and each time, the games nourish them with constant doses of the small successes they deserve, helping them become "confident citizens" (53). A senior fellow of the Manhattan Institute, Peter Huber, overwhelmed many people by testifying that his 6-year-old daughter learned basic musical notation by shooting ducks on an electronic keyboard linked via MIDI to software that runs on another computer. Huber testifies, "television--even Sesame Street--holds no interest for her at all. What I see in her experience is the face of learning transformed, almost beyond recognition". This proud father further admonishes, "Don't let your children (or maybe grandchildren) miss the train" (182). Referring to the findings of some doctors, Sheff points out that playing games has the power to soothe pain for two reasons. First, when players interact with a game with undivided attention, it is clinically proven that all kinds of pain and everything else considerably recede. Second, the player's highly excited state of mind generates a steady flow of a "feel-good" chemical called endorphin into the bloodstream. Endorphin is known as a natural suppressant of pain and develops a sense of euphoria. Playing games like Nintendo can create a sort of high, like that of jogging (204).
In an interview, the Information Systems and Computer Science department chairperson and his staff, who have a working knowledge of interactive multimedia and computer-generated games, acknowledged that the modern information superhighway is clearly a link to the world, and that it is here to stay and cannot be ignored. Dr. Ellis Brett recalls that a decade ago we lived in a world of isolation, with TV and mainframe computers, but now we live in the 'digital age', an age that brings virtually everything onto a cartridge or a CD. He concludes, "A teacher, a desk, and a book are not adequate any more". This point brings us back to the original thesis: that videogame technology must be recognised for its role and influence on the younger generation because, for better or worse, it clearly affects their academic and social life. The problem here, however, is not the educational or entertainment games but the violent and substandard ones. Many parents see their children learning values from mass media and video parlours rather than from schools, churches or homes. According to Browning, two disturbing facts have emerged from the congressional debate over violence in the media. The first is that most parents feel they are engaged in a battle with computer technology; the other is that some parents apparently feel they are losing that battle (691). In the April 1994 issue of Marketing Age, Kate Fitzgerald reports that the growing public outcry during 1993 forced Nintendo to remove the most violent scenes from Mortal Kombat's home version. But Sega, and America's very own competitor Atari, refused to reduce the violent content. As a result, Nintendo dropped from No. 1 to No. 2 and lost market share and millions of dollars beyond recovery. Unmoved by the public fury, Sega took the No. 1 position in the industry.
Hayao Nakayama, president and CEO of Sega Enterprises, says: "Unfortunately, Nintendo is going down." Nintendo's genuine efforts backfired. Instead of deterring blood lust, they drove more than 1 million action-hungry teenagers to its rival Sega, which offered the pure hard-core violence of the original arcade game. This year (1994) Nintendo realised its hard-earned lessons and its past 'mistakes' and came to terms with its moral responsibilities; it is expected to pick up significantly, but whether it can regain its old No. 1 position remains questionable (Fitzgerald 3). In one recent development, as part of a new industry policy urged by Congress and by its arm-twisting tactics, both companies have 'voluntarily' added on-package messages warning that some content may not be suitable for players under 17; this despite evidence that such warnings sometimes increase sales of violent video games (Fitzgerald 3). It is a good sign and a relief that in the past few months, the ever-growing violence of videogames has swept over even Congress. Herbert H. Kohl, D-Wis., the chairman of the Senate Judiciary Subcommittee on Juvenile Justice, and Joseph I. Lieberman, D-Conn., the chairman of the Senate Government Affairs Subcommittee on Regulation and Government Information, looked beyond television to violence in video games (Browning 691). During a recent (1993) congressional hearing, Senator Herbert Kohl announced that if the video game industry does not monitor its contents, Congress will. Kohl and other senators are co-sponsoring a bill that gives a one-year ultimatum to the videogame industry to create a set of standards that would likely include industry-wide ratings. The latest development, on March 4, 1994, is that because of the pressure exerted by the public, other interest groups and Congress, the game makers voluntarily came forward to announce the creation of the Industry Rating Council and embraced self-regulation ( ).
The New York Times dated 15 June 1994 reports that the industry's principal trade group recently announced two important pieces of news. The good news is that the computer games industry will develop a rating system to voluntarily label the amount of sex and violence in the roughly 2,000 new games that reach the market each year. According to the industry trade group, the bad news is that, unfortunately, the 5,000 computer games already in stores, Mr. Wasch reports, would not be rated (Rating 36). Bob Garfield, an expert on videogame software, in his regular column in Advertising Age angrily charges that the hideous manipulation of children's psyches is a disgrace, and further charges the industry with aiming to 'be heard by exploiting kids' distress' (21). During an on-line interview with Garfield through the Ad Age Bulletin Board Service (BBS) on Prodigy at EFPB35A, he responded by reiterating the same idea with more statistical data. To some extent he sounds reasonable and his comments are logical, but where his criticism of the U.S.-based Atari and other similar producers is concerned, he is more business-oriented and patriotic. He seems to be concerned more with the financial point of view than with the cultural and moral aspects. The question remains: who judges the culprit? Should it be the culprits themselves or a responsible government agency? Self-judgment is a sheer mockery of the censor board and the justice system as a whole. There should be equality under the law: cinema, video games, and any other media in this matter should be considered equally under the law. The videogame industry is not only controlled by two of the largest Japanese corporations, Nintendo and Sega; it also severely affects the very fabric of the younger generation and the society as a whole, economically.
The government should take the moral responsibility to curb these illicit effects on its future citizens by establishing a uniform rating code across all 50 states, like the censorship applied to cinema and other popular media. For example, in all the Hawaiian islands, the stores that sell fake guns, combat video games, and other war toys may be forced to post warnings so that the general public is informed about playthings that can increase "anger and violence" in children. The state of Hawaii has passed a bill which would require stores to place signs on shelves stating: "Warning. Think before you buy. This is a war toy. Playing with it increases anger and violence in children. Is this what you really want for your child?" (WAR TOYS). This may not be very effective altogether in controlling video games with violent content, but the warning still gives parents a chance to pause a moment before they decide to buy anything for their offspring. A voluntary rating system, or any other form of self-regulatory arrangement, will only help to widen the loopholes of the existing system. Including this multi-billion-dollar industry under the existing film rating system, or something similar to it, would greatly reduce the risk of violence and ultimately would help prevent youths from turning to violent solutions for all their problems. It would also help to foster a violence-free lifestyle and encourage the younger generation to spend their quality time with their studies and their parents. All other arrangements will, at best, only further delay the process of controlling the emerging violent themes and content of the many thousands of video games yet to be produced or released. f:\12000 essays\technology & computers (295)\Virtual Reality 2.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Virtual Reality - What it is and How it Works Imagine being able to point into the sky and fly.
Or perhaps walk through space and connect molecules together. These are some of the dreams that have come with the invention of virtual reality. With the introduction of computers, numerous applications have been enhanced or created. The newest technology being tapped is that of artificial reality, or "virtual reality" (VR). When Morton Heilig first got a patent for his "Sensorama Simulator" in 1962, he had no idea that 30 years later people would still be trying to simulate reality and that they would be doing it so effectively. Jaron Lanier first coined the phrase "virtual reality" around 1989, and it has stuck ever since. Unfortunately, this catchy name has caused people to dream up incredible uses for this technology, including using it as a sort of drug. This became evident when, among other people, Timothy Leary became interested in VR. This has also worried some of the researchers who are trying to create very real applications for medical, space, physical, chemical, and entertainment uses, among other things. In order to create this alternate reality, however, you need to find ways to create the illusion of reality with a piece of machinery known as the computer. This is done with several computer-user interfaces used to simulate the senses. Among these are stereoscopic glasses to make the simulated world look real, a 3D auditory display to give depth to sound, sensor-lined gloves to simulate tactile feedback, and head-trackers to follow the orientation of the head. Since the technology is fairly young, these interfaces have not been perfected, making for a somewhat cartoonish simulated reality. Stereoscopic vision is probably the most important feature of VR because in real life, people rely mainly on vision to get places and do things. The eyes are approximately 6.5 centimeters apart, and allow you to have a full-colour, three-dimensional view of the world.
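The geometry behind this depth cue is simple enough to sketch. The following back-of-the-envelope Python snippet (illustrative only; the 6.5 cm figure is the one quoted above) computes the vergence angle between the two eyes' lines of sight for a point straight ahead at a given depth. A head-mounted display must reproduce this cue by rendering each eye's image with the corresponding horizontal offset:

```python
import math

IPD = 0.065  # interpupillary distance in metres (the 6.5 cm quoted above)

def vergence_angle(depth_m):
    """Angle (in degrees) between the two eyes' lines of sight
    for a point straight ahead at the given depth."""
    return math.degrees(2 * math.atan((IPD / 2) / depth_m))

for d in (0.5, 1.0, 5.0, 50.0):
    print(f"depth {d:5.1f} m -> vergence {vergence_angle(d):6.3f} deg")
```

At half a metre the angle is over 7 degrees, while at 50 metres it has shrunk to a few hundredths of a degree, which is why very distant objects look essentially flat: there is almost no disparity left for the two eyes to exploit.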
Stereoscopy, in itself, is not a very new idea, but the new twist is trying to generate completely new images in real-time. In 1838, Sir Charles Wheatstone invented the first stereoscope, and the same basic principle is used in today's head-mounted displays. Presenting different views to each eye gives the illusion of three dimensions. The glasses that are used today work by using what is called an "electronic shutter". The lenses of the glasses interleave the views intended for each eye . . . Tactile feedback has been attempted with inflating air bladders in a glove, arrays of tiny pins moved by shape-memory wires, and even fingertip piezoelectric vibrotactile actuators. The latter method uses tiny crystals that vibrate when an electric current stimulates them. This design has not really taken off, however, but the other two methods are being more actively researched. According to a report called "Tactile Sensing in Humans and Robots," distortions inside the skin cause mechanosensitive nerve terminals to respond with electrical impulses. Each impulse is approximately 50 to 100 mV in magnitude and 1 ms in duration. However, the frequency of the impulses (up to a maximum of 500/s) depends on the stimulus . . . VR is also being applied to simulations. Such things as virtual wind tunnels have been in development for a couple of years and could save money and energy for aerospace companies. Medical researchers have been using VR techniques to synthesize diagnostic images of a patient's body to do "predictive" modeling of radiation treatment using images created by ultrasound, magnetic resonance imaging, and X-ray. A radiation therapist in a virtual world could view and expose a tumour at any angle and then model specific doses and configurations of radiation beams to aim at the tumour more effectively. Since radiation destroys human tissue easily, there is no allowance for error. Also, doctors could use "virtual cadavers" to practice rare operations which are tough to perform.
This is an excellent use because one could perform the operation over and over without the worry of hurting any human life. However, this sort of practice may have its limitations because it is only a virtual world. As well, at this time, the computer-user interfaces are not well enough developed, and it is estimated that it will take 5 to 10 years to develop this technology. In Japan, a company called Matsushita Electric Works, Ltd. is using VR to sell its products. They employ a VPL Research head-mounted display linked to a high-powered computer to help prospective customers design their own kitchens. Being able to see what your kitchen will look like before you actually refurnish it could save you from costly mistakes in the future. The entertainment industry stands to gain a lot from VR. f:\12000 essays\technology & computers (295)\Virtual Reality.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Joe Blige Virtual Reality Virtual reality, while still extremely new, has recently become the topic of many opposing viewpoints. It has caught the eye of the general public for several reasons. Perhaps this is mainly because of all the possibilities which virtual reality creates. Note that the possibilities are not pre-determined as either good or bad, mainly because there are many different opinions about the future of this developing technology. However, despite the controversy this new technology has aroused, society should not remain skeptical. Virtual reality has the potential, if used correctly, to become a great technological advancement that will aid society in many ways. In the past, virtual reality has been nothing more than a small step beyond video games. However, it is now apparent that this technology can be used for more practical purposes. These purposes include national defense, surgical procedures and various other applications.
Society has not fully acknowledged the benefits of virtual reality as of yet because it is still under development. The reason virtual reality has remained in development for so long is mainly its complexity. The hardware developed so far is unable to perform the large calculations required by a virtual-reality-based machine. However, as is apparent in recent years, technology is advancing at an extreme rate. This is another reason why society's hopes for virtual reality should remain, and have remained, unwavering. In Orenstein's story, she gives the perspective of the average citizen, who is obviously uncertain about the uses and/or effects that virtual reality will have upon society. The show she attended was quick to point out the practicality of virtual reality; however, it still left much to be desired. It seems that Orenstein was disgruntled when she came to an exhibit and the topic of cyber-sex was raised. Perhaps it wasn't just that it came up, but how it came up. The idea of a man and a woman being in a virtual world and the man fondling the woman's breasts was probably, although very much possible, not a great first impression. It gave Orenstein the opportunity to explore the evils that virtual reality makes possible. After a while, Orenstein realizes that just as the computing age has hackers, the virtual age will have its own high-tech delinquents. You can't prevent technology from being abused. There will be those who use VR rudely, stupidly, dangerously--just as they do the telephone or computer. Like the telephone and the modem, its popular rise will also eliminate the need for certain fundamental kinds of human contact, even as it enhances our ability to communicate. (Orenstein 258) Here she is quick to point out that because virtual reality is such a new technology, it is extremely possible for hackers to have their way with it.
Perhaps she also points out that in order for society to accept this new technology, it will have to accept its risks as well. In the government's prospective use of virtual reality, it is easy to see how this technology proves useful. Supposing that the United States got into a war, using virtual reality pilots instead of real pilots would obviously mean fewer casualties. Pilots would fly their aircraft from a remote location via video and audio equipment in the form of virtual reality. As technology improves over the next several years, it will become easier and easier for pilots to fly planes from a remote location. However, despite all the lives this may save, there is a down side: perhaps this will encourage the government to resort to violence more readily. Without any loss of lives, the only thing the government has to lose by attacking is the cost of the planes. Keeping this idea in mind, it is very likely that the US would spend less time negotiating and more time fighting. This is most definitely a negative side effect of virtual reality because it would weaken the relationships that the US has with other countries. Integrating virtual reality with society is where the majority of problems occur. It is clearly apparent that because this technology is so new, society is unsure how it will fit in. This is also a good example of why people's opinions are so varied. Some people see virtual reality as just another tool which will aid society in several ways. Others see it as dominating society altogether and affecting everyone's lives every day. It obviously has the potential to be both, and it is easy to see why people are so hesitant to decide. Perhaps another reason for society's lack of optimism is the fear that people will somehow be removed from actual reality. Although quite ironic, for a long time society has had a fear that technology will someday take control of people's lives.
Perhaps people fear the idea of technology becoming so advanced that they will no longer be able to tell whether they are in virtual or actual reality. It is clear that technology has definitely affected society in recent years. However, it is quite difficult to predict the role of technology in the future. The potential for technology is certainly there; it just needs to be focused in the right direction. Technology most definitely has the ability to run out of control. Just the idea of man creating technology and having it run out of control is something society has been fascinated with for many years. Books and movies depicting technology overwhelming society have been created with much of this idea in mind. Perhaps it is possible that virtual reality will be that technology which man is unable to control and which will take over all of society. If this were the case, society and the people within it would become uncertain whether they were in virtual or actual reality. It must be pointed out, however, that due to the cautious nature of society in general, it is very unlikely that anything like this will ever actually occur. If society is intelligent enough to invent such a technology, it should be able to determine and control its consequences. Orenstein brings up a good point when she says, "This time, we have the chance to enter the debate about the direction of a revolutionary technology, before that debate has been decided for us" (258). Often in the past, society as a whole has been subject to decisions made by the creators of new technology. In this quote, however, Orenstein points out that with this technology, people should not only try but make it a priority to get involved. She, like many others, sees this technology as having a huge amount of potential. Without the direction and influence of society upon virtual reality, it could go to waste, or even worse, turn into society's enemy of sorts.
Towards the end of the story she tries to depict how virtual reality will have an impact upon society whether it likes it or not: "As I rode down the freeway, I found myself going a little faster than usual, edging my curves a little sharper, coming a little closer than was really comfortable to the truck merging in the lane ahead of me. Maybe I was just tired. It had been a long night. But maybe it just doesn't take the mind that long to grab onto the new and make it real. Even when you don't want it to." She depicts that no matter how aware of virtual reality society is, the human brain still has instincts that cannot be controlled. That is one of the drawbacks of virtual reality: no one is sure what to expect. Just as with any other technology, the only way to find out the results of virtual reality is to test the limits. Knowing that virtual reality has the ability to affect so many people in such a large number of ways, there needs to be some kind of limitation. This brings up another key controversy as to who should be in control of limiting this virtual world. If the government is in control, that control could likely be abused and mishandled. However, if society as a whole is left to contemplate its uses, the effects could be either good or bad. Although society knows a lot about virtual reality, there is still so much that it doesn't know. Perhaps in the coming years, new technology will come out and people will learn more about this virtual world. However, until that time, the questions will remain numerous and the answers doubtful, yet the possibilities are unlimited. f:\12000 essays\technology & computers (295)\Virtual Reality1.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Today, virtual reality allows people to study artificial worlds through simulation and computer graphics. Computers have changed the way we perform flight training, conduct scientific research and do business.
Flight simulators have drastically reduced the time and money required to learn to fly large jets. One of the most interesting capabilities of virtual reality is the ability to practice certain medical procedures. Computers are helping many doctors perform complicated operations very simply. Computers have changed the way we look at health problems. They have made once-intractable health problems much easier to solve in today's society. We have only begun to realize the extreme wastefulness of burning expensive fuel in aircraft in order to learn something in an hour that could be taught in ten minutes in a simulator. Simulators have come a long way since 1929, when Ed Link first built what was soon to be known as the pilot maker or, more affectionately, the blue box. Students often find themselves sitting at the end of a runway waiting for takeoff clearance on a busy day, with the engine turning and burning expensive gas. This is not a very effective way for students to spend money. Most students do not have access to expensive flight simulators; most have to travel hundreds of miles to take advantage of these amazing machines. Flight simulators are much better than an airplane for the simple reason that in a simulator the learning environment is much safer. Students are able to avoid the overriding need to keep the airplane flying and out of harm's way. In a simulator a student is constantly busy, practicing what he is supposed to be learning, and once he's flown a given maneuver, he is able to go back and do it over again, without wasting time or fuel. Years ago doctors used X-rays to see the insides of humans. X-rays were most helpful in finding broken bones. These machines were an incredible breakthrough years ago. Today, plain X-ray machines are used far less often; instead we use computer-aided volumetric images of internal organs, often referred to as cross-sectional images of the body's interior. In the past, scars were often left behind after major surgeries.
We have avoided leaving these nasty scars through fiber optics. If a patient needs surgery on an injured knee, the doctor can cut two small holes in the side of the patient's knee and glide the tiny light, camera, and operating tools inside. The doctor is able to monitor what he is doing on a color monitor screen. Virtual reality also allows leeway for doctors' mistakes. With virtual reality, a student is able to try several different operations more than once. If the attempts are failures, the patient will not be injured. Before virtual reality, students were often required to operate on animals. Because of virtual reality, we are able to save money along with animals' lives. We have come a long way in virtual reality since World War II. We have been able to save time, money and many lives, in both medical and flight training. The human race has many new and exciting advancements coming because of virtual reality. I hope that one day this new advancement will not be used in war tactics, but rather only for practical purposes. f:\12000 essays\technology & computers (295)\Virus.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Are "Good" Computer Viruses Still a Bad Idea? Vesselin Bontchev Research Associate Virus Test Center University of Hamburg Vogt-Koelln-Str. 30, 22527 Hamburg, Germany bontchev@fbihh.informatik.uni-hamburg.de [Editor's note: Vesselin's current email address is bontchev@complex.is] During the past six years, computer viruses have caused an incalculable amount of damage - mostly due to loss of time and resources. For most users, the term "computer virus" is a synonym for the worst nightmares that can happen on their system. Yet some well-known researchers keep insisting that it is possible to use the replication mechanism of viral programs for some useful and beneficial purposes. This paper is an attempt to summarize exactly why the general public perceives computer viruses as something inherently bad.
It also considers several of the proposed models of "beneficial" viruses and points out the problems in them. A set of conditions is listed to which every virus that claims to be beneficial must conform. Finally, a realistic model using replication techniques for beneficial purposes is proposed, and directions are given in which this technique can be improved further. The paper also demonstrates that the main reason for the conflict between those supporting the idea of a "beneficial virus" and those opposing it is that the two sides are assuming different definitions of what a computer virus is. 1. What Is a Computer Virus? The general public usually associates the term "computer virus" with a small, nasty program which aims to destroy the information on their machines. As usual, the general public's understanding of the term is incorrect. There are many kinds of destructive or otherwise malicious computer programs, and computer viruses are only one of them. Such programs include backdoors, logic bombs, trojan horses and so on [Bontchev94]. Furthermore, many computer viruses are not intentionally destructive - they simply display a message, play a tune, or even do nothing noticeable at all. The important thing, however, is that even those not intentionally destructive viruses are not harmless - they cause a lot of damage in terms of the time, money and resources spent to remove them - because they are generally unwanted and the user wishes to get rid of them. A much more precise and scientific definition of the term "computer virus" has been proposed by Dr. Fred Cohen in his paper [Cohen84]. This definition is mathematical - it defines the computer virus as a sequence of symbols on the tape of a Turing Machine.
The definition is rather difficult to express exactly in a human language, but an approximate interpretation is that a computer virus is a "program that is able to infect other programs by modifying them to include a possibly evolved copy of itself". Unfortunately, there are several problems with this definition. One of them is that it does not mention the possibility that a virus might infect a program without modifying it - by inserting itself in the execution path. Some typical examples are the boot sector viruses and the companion viruses [Bontchev94]. However, this is a flaw only of the human-language expression of the definition - the mathematical expression defines the terms "program" and "modify" in a way that clearly includes the kinds of viruses mentioned above. A second problem with the above definition is its lack of recursiveness. That is, it does not specify that after infecting a program, a virus should be able to replicate further, using the infected program as a host. Another, much more serious problem with Dr. Cohen's definition is that it is too broad to be useful for practical purposes. In fact, his definition classifies as "computer viruses" even such cases as a compiler which is compiling its own source, a file manager which is used to copy itself, and even the program DISKCOPY when it is on a diskette containing the operating system - because it can be used to produce an exact copy of the programs on this diskette. In order to understand the reason for the above problem, we should pay attention to the goal for which Dr. Cohen's definition was developed. His goal was to prove several interesting theorems about the computational aspects of computer viruses [Cohen89]. In order to do this, he had to develop a mathematical (formal) model of the computer virus. For this purpose, one needs a mathematical model of the computer. One of the most commonly used models is the Turing Machine (TM).
Indeed, there are a few others (e.g., Markov chains, the Post Machine, etc.), but they are not as convenient as the TM, and all of them have been proven to be equivalent to it. Unfortunately, in the environment of the TM model, we cannot speak about "programs" which modify "other programs" - simply because a TM has only one, single program - the contents of the tape of that TM. That is why Cohen's model of a computer virus considers the history of the states of the tape of the TM. If a sequence of symbols on this tape appears at a later moment somewhere else on the tape, then this sequence of symbols is said to be a computer virus for this particular TM. It is important to note that a computer virus should always be considered in relation to some given computing environment - a particular TM. It can be proven ([Cohen89]) that for any particular TM there exists a sequence of symbols which is a virus for that particular TM. Finally, technical computer experts usually use definitions of the term "computer virus" which are less precise than Dr. Cohen's model, while at the same time being much more useful for practical purposes and still much more correct than the general public's vague understanding of the term. One of the best such definitions is ([Seborg]): "We define a computer 'virus' as a self-replicating program that can 'infect' other programs by modifying them or their environment such that a call to an 'infected' program implies a call to a possibly evolved, and in most cases, functionally similar copy of the 'virus'." The important thing to note is that a computer virus is a program that is able to replicate by itself. The definition does not specify explicitly that it is a malicious program. Also, a program that does not replicate is not a virus, regardless of whether it is malicious or not. Therefore maliciousness is neither a necessary nor a sufficient property for a program to be a computer virus.
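Cohen's tape-history formulation above can be illustrated with a toy check. This sketch is not from Cohen's paper and deliberately ignores the formal TM machinery and the "possibly evolved copy" clause; it only captures the core intuition that a sequence of symbols is "viral" if it later shows up somewhere else on the tape, i.e., a copy of it has been written:

```python
def positions(tape, seq):
    """All start indices at which seq occurs in the tape string."""
    return {i for i in range(len(tape) - len(seq) + 1)
            if tape[i:i + len(seq)] == seq}

def reappears_elsewhere(snapshots, seq):
    """Toy version of Cohen's criterion: True if seq appears in a
    later tape snapshot at a position where it had not been seen
    before, i.e. a copy of it was written somewhere else."""
    seen = set()
    for tape in snapshots:
        found = positions(tape, seq)
        if seen and (found - seen):
            return True
        seen |= found
    return False

# Between the two snapshots, "V" copies itself from cell 0 to cell 3:
history = ["V....", "V..V."]
print(reappears_elsewhere(history, "V"))   # True  - "viral" for this toy machine
print(reappears_elsewhere(history, "."))   # False - never appears anywhere new
```

Note how the toy check also exhibits the over-breadth complained about above: any sequence that gets copied anywhere on the tape qualifies, which is exactly why DISKCOPY-style legitimate copying falls under the formal definition.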
Nevertheless, in the past ten years a huge number of intentionally or unintentionally destructive computer viruses have caused an incalculable amount of damage - mostly due to the loss of time, money, and resources needed to eradicate them - because in all cases they have been unwanted. Some damage has also been caused by the direct loss of valuable information due to the intentionally destructive payloads of some viruses, but this loss is relatively minor when compared to the main one. Lastly, a third, indirect kind of damage is caused to society - many users are forced to spend money on buying, and time on installing and using, several kinds of anti-virus protection. Does all this mean that computer viruses can only be harmful? Intuitively, computer viruses are just a kind of technology. As with any other kind of technology, they are ethically neutral - they are neither "bad" nor "good" - it is the purposes that people use them for that can be "bad" or "good". So far they have been used mostly for bad purposes. It is therefore natural to ask whether it is possible to use this kind of technology for good purposes. Indeed, several people have asked this question - with Dr. Cohen being one of the most active proponents of the idea [Cohen91]. Some less qualified people have even attempted to implement the idea, but have failed miserably (see section 3). It is natural to ask - why? Let's consider the reasons why the idea of a "good" virus is usually rejected by the general public. In order to do this, we shall consider why people think that a computer virus is always harmful and cannot be used for beneficial purposes. 2. Why Are Computer Viruses Perceived as Harmful? About a year ago, we asked the participants of the electronic forum Virus-L/comp.virus, which is dedicated to discussions about computer viruses, to list all the reasons they could think of for why they perceive the idea of a "beneficial" virus as a bad one.
What follows is a systematized and generalized list of those reasons. 2.1. Technical Reasons This section lists the arguments against the "beneficial virus" idea which have a technical character. They are usually the most objective ones. 2.1.1. Lack of Control Once a computer virus is released, the person who released it has no control over how the virus will spread. It jumps from machine to machine, using the unpredictable patterns of software sharing among users. Clearly, it can easily reach systems on which it is not wanted or on which it would be incompatible with the environment and would cause unintentional damage. It is not possible for the virus writer to predict on which systems the virus will run, and therefore it is impossible to test the virus on all those systems for compatibility. Furthermore, during its spread, a computer virus could even reach a system that did not exist when the virus was created - and for which it was therefore impossible to test the virus for compatibility. The above is not always true - that is, it is possible to test the virus for compatibility on a reasonably large number of systems that are supposed to run it. However, it is the damaging potential of a program that is spreading out of control which scares the users. 2.1.2. Recognition Difficulty A lot of computer viruses already exist which are either intentionally destructive or otherwise harmful. There are a lot of anti-virus programs designed to detect and stop them. All those harmful viruses are not going to disappear overnight. Therefore, if one develops a class of beneficial viruses and people actually begin to use them, then the anti-virus programs will have to be able to distinguish between the "good" and the "bad" viruses - in order to let the former in and keep the latter out. Unfortunately, in general it is theoretically impossible even to distinguish between a virus and a non-viral program ([Cohen89]).
There is no reason to think that distinguishing between "good" and "bad" viruses will be much easier. While it might be possible to distinguish between them using virus-specific anti-virus software (e.g., scanners), we should not forget that many people rely on generic anti-virus defenses, for instance those based on integrity checking. Such systems are designed to detect modifications, not specific viruses, and therefore will be triggered by the "beneficial" virus too, thus causing an unwanted alert. Experience shows that the cost of such false positives is the same as that of a real infection with a malicious virus - because the users waste a lot of time and resources looking for a non-existent problem. 2.1.3. Resource Wasting A computer virus eats up disk space, CPU time, and memory resources during its replication. A computer virus is a self-replicating resource eater. One typical example is the Internet Worm, accidentally released by a Cornell student. It was not designed to be intentionally destructive, but in the process of its replication, the multiple copies of it used so many resources that they practically brought down a large portion of the Internet. Even when a computer virus uses a limited amount of resources, this is considered a bad thing by the owner of the machine on which it happens, if it happens without authorization. 2.1.4. Bug Containment A computer virus can easily escape a controlled environment, and this makes it very difficult to test such programs properly. And indeed - experience shows that almost all computer viruses released so far suffer from significant bugs, which either prevent them from working in some environments or even cause unintentional damage in those environments. Of course, any program can (and usually does) contain bugs. This is especially true for large and complex software systems. However, a computer virus is not just a normal buggy program.
It is a self-spreading buggy program, which is out of control. Even if the author of the virus discovers a bug at a later time, there remains the almost intractable problem of revoking all existing copies of the virus and replacing them with fixed new versions. 2.1.5. Compatibility Problems A computer virus that can attach itself to any of the user's programs would disable the several programs on the market that perform a checksum on themselves at runtime and refuse to run if modified. In a sense, the virus would perform a denial-of-service attack and thus cause damage. Another problem arises from some attempts to solve the "lack of control" problem by creating a virus that asks for permission before infecting. Unfortunately, this causes an interruption of the task currently being executed until the user provides the proper response. Besides being annoying for the user, it could sometimes even be dangerous. Consider the following example. It is possible that a computer is used to control some kind of life-critical equipment in a hospital. Suppose that such a computer gets infected by a "beneficial" computer virus, which asks for permission before infecting any particular program. Then it is perfectly possible that a situation arises in which a particular program has to be executed for the first time after the virus has appeared on the computer, and that this program has to urgently perform some task which is critical for the life of a patient. If at that time the virus interrupts the process with a request for permission to infect this program, then the resulting delay (especially if there is no operator around to authorize or deny the request) could easily result in the death of the patient. 2.1.6. Effectiveness It is argued that any task that could be performed by a "beneficial" virus could also be performed by a non-replicating program.
Since there are risks that follow from the capability of self-replication, it would therefore be much better if a non-replicating program were used instead of a computer virus. 2.2. Ethical and Legal Reasons The following section lists the arguments against the "beneficial virus" idea which are of an ethical or legal nature. Since neither ethics nor legal systems are universal across human society, those arguments are likely to carry different weight in different countries. Nevertheless, they have to be taken into account. 2.2.1. Unauthorized Data Modification It is usually considered unethical to modify other people's data without their authorization. In many countries it is also illegal. Therefore, a virus which performs such actions will be considered unethical and/or illegal, regardless of any positive outcome it could bring to the infected machines. Sometimes this problem is perceived by the users as "the virus writer claims to know better than me what software I should run on my machine." 2.2.2. Copyright and Ownership Problems In many cases, modifying a particular program could mean that copyright, ownership, or at least technical support rights for this program are voided. We witnessed such an example at the VTC-Hamburg. One of the users who called us for help with a computer virus was a sight-impaired lawyer, who was using special Windows software to display the documents he was working on in a large font on the screen, so that he could read them. His system was infected by a relatively non-damaging virus. However, when the producer of the software learned that the machine was infected, they refused any technical support to the user until the infection was removed and their software reinstalled from clean originals. 2.2.3. Possible Misuse An attacker could use a "good" virus as a means of transportation to penetrate a system. 
For instance, a person with malicious intent could get a copy of a "good" virus and modify it to include something malicious. Admittedly, an attacker could trojanize any program, but a "good" virus provides the attacker with a means to transport his malicious code to a virtually unlimited population of computer systems. The potential to be easily modified to carry malicious code is one of the things that makes a virus "bad." 2.2.4. Responsibility Declaring some viruses "good" and "beneficial" would just provide an excuse for the crowd of irresponsible virus writers to justify their activities and to claim that they are actually doing some kind of "research." In fact, this is already happening: the people mentioned above often quote Dr. Fred Cohen's ideas for beneficial viruses as an excuse for what they are doing, often without even bothering to understand what Dr. Cohen is talking about. 2.3. Psychological Reasons The arguments listed in this section are of a psychological nature. They are usually the result of some kind of misunderstanding and should be considered an obstacle that has to be "worked around." 2.3.1. Trust Problems Users like to think that they have full control over what is happening on their machines. The computer is a very sophisticated device. Most computer users do not understand very well how it works and what happens inside it. This lack of knowledge and certainty creates fear. Only the feeling that the reactions of the machine will always be known, controlled, and predictable can help users overcome this fear. However, a computer virus takes control of the computer away from the user. The virus activity ruins the trust that the user has in his/her machine, because it destroys the user's belief that s/he can control it. This can be a source of permanent frustration. 2.3.2. Negative Common Meaning For most people, the term "computer virus" is already loaded with negative meaning. 
The media have already widely established the belief that "computer virus" is a synonym for "malicious program." In fact, many people apply the word "virus" to malicious programs that are unable to replicate, such as trojan horses, or even to bugs in perfectly legitimate software. People will never accept a program that is labelled a computer virus, even if it claims to do something useful. 3. Some Bad Examples of "Beneficial" Viruses Regardless of all the objections listed in the previous section, several people have asked whether a computer virus could be used for something useful, instead of only for destructive purposes. And several people have tried to answer this question in the affirmative. Some of them have even implemented their ideas in practice and experimented with them in the real world, unfortunately without success. In this section we shall present some of the unsuccessful attempts made so far to create a beneficial virus, and explain why they have been unsuccessful. 3.1. The "Anti-Virus" Virus Some computer viruses are designed to work not only in a "virgin" environment of infectable programs, but also on systems that include anti-virus software and even other computer viruses. In order to survive successfully in such environments, those viruses contain mechanisms to disable and/or remove the said anti-virus programs and "competitor" viruses. Examples of such viruses in the IBM PC environment are Den_Zuko (removes the Brain virus and replaces it with itself), Yankee_Doodle (the newer versions are able to locate the older ones and "upgrade" the infected files by removing the older version of the virus and replacing it with the newer one), Neuroquila (disables several anti-virus programs), and several other viruses. Several people have had the idea of developing this behaviour further and creating an "anti-virus" virus: a virus which would be able to locate other (presumably malicious) computer viruses and remove them. 
Such a self-replicating anti-virus program would have the benefit of spreading very fast and updating itself automatically. Several viruses have been created as implementations of the above idea. Some of them locate a few known viruses and remove them from the infected files; others attach themselves to clean files and issue an error message if another piece of code becomes attached after the virus (assuming that it has to be an unwanted virus); and so on. However, all such pieces of "self-replicating anti-virus software" have been rejected by the users, who have considered the "anti-virus" viruses just as malicious and unwanted as any other real computer virus. In order to understand why, it is enough to realize that the "anti-virus" virus matches several of the rules that state why a replicating program is considered malicious and/or unwanted. Here is a list of them for this particular idea. First, the idea violates the Control condition. Once the "anti-virus" virus is released, its author has no means to control it. Second, it violates the Recognition condition. A virus that attaches itself to executable files will definitely trigger the anti-virus programs based on monitoring or integrity checking. There is no way for those programs to decide whether they have been triggered by a "beneficial" virus or not. Third, it violates the Resource Wasting condition. Adding an almost identical piece of code to every executable file on the system is definitely a waste: the same purpose can be achieved with a single copy of the code and a single file containing the necessary data. Fourth, it violates the Bug Containment condition. There is no easy way to locate and update or remove all instances of the virus. Fifth, it causes several compatibility problems, especially for self-checking programs, thus violating the Compatibility condition. Sixth, it is not as effective as a non-viral program, thus violating the Effectiveness condition. 
A virus-specific anti-virus program has to carry thousands of scan strings for the existing malicious viruses; it would be very ineffective to attach a copy of it to every executable file. Even a generic anti-virus program (i.e., one based on monitoring or integrity checking) would be more effective if it existed in a single copy and were executed under the control of the user. Seventh, such a virus modifies other people's programs without their authorization, thus violating the Unauthorized Data Modification condition. In some cases such viruses ask the user for permission before "protecting" a file by infecting it. However, even in those cases they cause unwanted interruptions, which, as we have already demonstrated, can in some situations be fatal. Eighth, by modifying other programs such viruses violate the Copyright condition. Ninth, at least with the current implementations of "anti-virus" viruses, it is trivial to modify them to carry destructive code, thus violating the Misuse condition. Tenth, such viruses are already widely used as examples by the virus writers when they try to defend their irresponsible actions and to disguise them as legitimate research; thus the idea violates the Responsibility condition too. As we can see from the above, the idea of a beneficial anti-virus virus is "bad" according to almost any of the criteria listed by the users. 3.2. The "File Compressor" Virus This is one of the oldest ideas for "beneficial" viruses. It is first mentioned in Dr. Cohen's original work [Cohen84]. The idea consists of creating a self-replicating program which will compress the files it infects before attaching itself to them. Such a program is particularly easy to implement as a shell script for Unix, but it is perfectly doable for the PC too. 
And it has already been done: there is a family of MS-DOS viruses, called Cruncher, which appends itself to executable files, then compresses the infected file using Lempel-Ziv-Huffman compression, and then prepends a small decompressor which decompresses the file in memory at runtime. Regardless of the supposed benefits, this idea also fails the test of the criteria listed in the previous section. Here is why. First, the idea violates the Control condition. Once the virus is released, its author has no means to control its spread. In the particular implementation of Cruncher, the virus writer has attempted to introduce some kind of control. The virus asks the user for permission before installing itself in memory, causing unwanted interruptions. It is also possible to tell the virus to install itself without asking any questions, by setting an environment variable. However, there is no means to tell the virus not to install itself and not to ask any questions, which should be the default action. Second, the idea violates the Recognition condition. Several virus scanners detect and recognize Cruncher by name, the process of infecting an executable triggers most monitoring programs, and the infected files are, of course, modified, which triggers most integrity checkers. Third, the idea violates the Resource Wasting condition. A copy of the decompressor is present in every infected file, which is obviously unnecessary. Fourth, the idea violates the Bug Containment condition. If bugs are found in the virus, the author has no simple means to distribute the fix and to upgrade all existing copies of the virus. Fifth, the idea violates the Compatibility condition. There are many files which stop working after being compressed. Examples include programs that perform a self-check at runtime, self-modifying programs, programs with an internal overlay structure, Windows executables, and so on. 
Admittedly, those programs stop working even after being compressed with a stand-alone (i.e., non-viral) compression program. However, it is much more difficult to compress them by accident when using such a program, quite unlike the case when the user is running a compression virus. Sixth, the idea violates the Effectiveness condition. It is perfectly possible to use a stand-alone, non-viral program to compress the executable files and prepend a short decompressor to them. This has the added advantage that the code for the compressor does not have to reside in every compressed file, and thus we don't have to worry about its size or speed, because it has to be executed only once. True, the decompressor code still has to be present in each compressed file, and many programs will still refuse to work after being compressed. The solution is to use compression not at the file level, but at the disk level. And indeed, compressed file systems are available for many operating environments (DOS, Novell, OS/2, Unix), and they are much more effective than a file-level compressor that spreads like a virus. Seventh, the idea still violates the Copyright condition. It could be argued that it does not violate the Unauthorized Data Modification condition, because the user is asked to authorize the infection. We shall accept this, with the remark mentioned above: that it still causes unwanted interruptions. It is also not trivial to modify the virus in order to make it malicious, so we shall assume that the Misuse condition is not violated either, although no serious attempts are made to ensure that the integrity of the virus has not been compromised. Eighth, the idea violates the Responsibility condition. This particular virus, Cruncher, was written by the same person who has released many other viruses, far from "beneficial" ones, and Cruncher is clearly used as an attempt to justify virus writing and to disguise it as legitimate "research." 3.3. 
The "Disk Encryptor" Virus This virus has been published by Mark Ludwig, author of two books and a newsletter on virus writing, and of several real viruses, variants of many of which are spreading in the real world, causing real damage. The idea is to write a boot sector virus which encrypts the disks it infects with a strong encryption algorithm (IDEA in this particular case) and a user-supplied password, thus ensuring the privacy of the user's data. Unfortunately, this idea is just as flawed as the previous ones. First, it violates the Control condition. True, the virus author has attempted to introduce some means of control. The virus is supposed to ask the user for permission before installing itself in memory and before infecting a disk. However, this still causes unwanted interruptions, and reportedly it does not always work properly; that is, in some cases the virus installs itself even if the user has told it not to. Second, it violates the Recognition condition. Several virus-specific scanners recognize this virus either by name or as a variant of Stealth_Boot, which it actually is. Because it is a boot sector infector, it is unlikely to trigger the monitoring programs. However, the modification that it causes to the hard disk when infecting it will trigger most integrity checkers. Those that have the capability to automatically restore the boot sector, thus removing any virus possibly present, will cause the encrypted disk to become inaccessible and therefore cause serious damage. Third, the idea violates the Compatibility condition. A boot sector virus that is permanently resident in memory usually causes problems for Windows f:\12000 essays\technology & computers (295)\Viruses.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ It is morning. You awaken to the sweet smell of flowers and the sound of birds chirping. You turn on your new IBM-compatible computer only to find that every bit and byte of information has been erased. 
A computer virus has struck. Yes, these small bits of computer code have slowly overtaken the world of computing. A computer virus is a small program that attaches itself to disks and computer systems with instructions to do something abnormal. Sometimes the effects of a computer virus can be harmless. Sometimes the effects of a computer virus can be disastrous. But whichever way you look at it, they still cause problems. There are many kinds of computer viruses. Three of the most common are the time bomb, the logic bomb, and the Trojan horse. The time bomb is a virus triggered by the computer's clock reaching a certain date and time (often Friday the thirteenth). The logic bomb is a virus triggered by a certain value appearing in a certain part of the computer's memory, either relevant to the virus's purposes or at random. The Trojan horse is an innocent-seeming program deliberately infected with a virus and circulated publicly. There is a cure for these viruses, though. These "cures" are called vaccines. A vaccine is a program that watches for typical things viruses do, halts them, and warns the computer operator. "Put a kid with the chicken pox together with a bunch of healthy kids and not all of them will get sick." But that is not the case with computer viruses. You see, when a computer passes on a virus, the infection never fails unless the receiving computer is protected with a vaccine. A typical computer virus spreads faster than the chicken pox too. Now, as I said before, when a computer virus attempts to infect another computer the attack is not always successful. However, that does not mean the infected computer stops trying. An infected computer will pass on the virus every chance it gets. Computer viruses are spread by two methods: floppy disks and modems. A modem is a phone link that can connect to a bulletin board service (B.B.S.). A B.B.S. is a lot like what it sounds like, a bulletin board. 
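The time-bomb trigger described above is nothing more than a date comparison. A hypothetical sketch of such a check (shown only to illustrate the trigger condition, not any actual virus):

```python
import datetime

def is_friday_the_13th(day):
    """A time bomb's trigger is a simple date test: here, any
    Friday that falls on the 13th of the month."""
    return day.day == 13 and day.weekday() == 4  # weekday() == 4 means Friday

# A vaccine cannot see the date test itself; it has to watch for the
# abnormal actions the program takes once the trigger fires.
```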
If a person calls you and you're not home, he leaves a message so that the next time you use the B.B.S. you can see the message. However, sometimes a person can leave a virus on a B.B.S., or an unsuspecting computer user whose computer is infected can upload one; the next time you hook up to the B.B.S. you may get infected. Once a virus reaches a B.B.S. it is virtually unstoppable unless the corporation controlling the B.B.S. uses a vaccine to flush out the virus. So far most virus attacks have been made on large computer networks and Apple computers. That doesn't mean that single users or IBM owners are completely safe either. In 1989 there were two million five thousand outbreaks of viruses. Most computer viruses originate in Bulgaria, a country in Europe. As a matter of fact, the most deadly computer viruses originate in Bulgaria. One virus called the Dark Avenger was created in Bulgaria, then sent to the United States of America, where it started destroying military secrets. The military knew that it had to have been designed by a lone programmer, because if the Bulgarian government had made it, it could just turn around like a boomerang and attack them. In Bulgaria there is no real law against computer crime. You could do something with a computer that could get you the death penalty here and get off with a slap on the wrist there. One of the most famous viruses of all time was the Michelangelo virus. This virus was created by a madman who wanted everybody to remember the famous painter. This virus was a time bomb set to go off on the artist's birthday, March sixth. This virus affected more computers than any other virus. When this virus exploded, it erased every bit of information with it. The average price for the Michelangelo virus vaccine is about $160. To sum up my whole report, I think Clifford Stoll said it best when he said "a safe computer is one that isn't connected to the outside world." 
f:\12000 essays\technology & computers (295)\VR 2.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ An Insight Into Virtual Reality Virtual reality is the creation of a highly interactive, computer-based multimedia environment in which the user becomes a participant with the computer in a "virtually real" world. We are living in an era characterized by 3D virtual systems created by computer graphics. In the concept called virtual reality (VR), the virtual reality engineer combines computer, video, image-processing, and sensor technologies so that a human can enter into and react with spaces generated by computer graphics. In 1969-70, an MIT scientist went to the University of Utah, where he began to work with vector-generated graphics. He built a see-through helmet that used television screens and half-silvered mirrors, so that the environment was visible through the TV displays. It was not yet designed to provide a surrounding environment. It was not until the mid-'80s that virtual reality systems became more defined. The AMES contract, started in 1985, produced the first glove in February 1986. The glove is made of thin Lycra and is fitted with 15 sensors that monitor finger flexion and extension and hand position and orientation. It is connected to a computer through fiber-optic cables. The sensor inputs enable the computer to generate an on-screen image of the hand that follows the operator's hand movements. The glove also has miniature vibrators in the fingertips to provide feedback to the operator from grasped virtual objects. Therefore, driven by the proper software, the system allows the operator to interact by grabbing and moving a virtual object within a simulated room, while experiencing the "feel" of the object. The virtual reality line includes the Datasuit and the Eyephone. The Datasuit is an instrumented full-body garment that enables full-body interaction with a computer-constructed virtual world. 
In one use, this product is worn by film actors to give realistic movement to animated characters in computer-generated special effects. The Eyephone is a head-mounted stereo display that shows a computer-made virtual world in full color and 3D. The Eyephone technology is based on an experimental Virtual Interface Environment Workstation (VIEW) design. VIEW is a head-mounted stereoscopic display system with two 3.9-inch television screens, one for each eye. The display can show a computer-generated scene or a real environment sent by remote video cameras. Sound effects delivered to the headset increase the realism. It was intended to use the glove and software for such ideas as surgical simulation, or "3D virtual surgery," for medical students. In the summer of 1991, US trainee surgeons were able to practice leg operations without having to cut anything solid. NASA scientists have developed a three-dimensional computer simulation of a human leg which surgeons can operate on by entering the computer world of virtual reality. Surgeons use the glove and Eyephone technology to create the illusion that they are operating on a leg. Other virtual reality systems, such as the Autodesk and the CAVE, have also come up with techniques to penetrate a virtual world. The Autodesk uses a simple monitor and is the most basic visual example of virtual reality. One example of where this could be used is while exercising. For instance, the Autodesk may be connected to an exercise bike, and you can then look around a graphic world as you pedal through it. If you pedal fast enough, your bike takes off and flies. The CAVE is a new virtual reality interface that engulfs the individual in a room whose walls, ceiling, and floor surround the viewer with virtual space. The illusion is so powerful you won't be able to tell what's real and what's not. Computer engineers seem fascinated by virtual reality because you can not only program a world but, in a sense, inhabit it. 
Mythic space surrounds the cyborg, embracing him/her with images that seem real but are not. The sole purpose of cyberspace virtual reality technology is to trick the human senses, to help people believe and uphold an illusion. Virtual reality engineers are space makers; to a certain degree they create space for people to play around in. A space maker sets up a world for an audience to act directly within, not just so the audience can imagine they are experiencing a reality, but so they can experience it directly. "The film maker says, 'Look, I'll show you.' The space maker says, 'Here, I'll help you discover.' However, what will the space maker help us discover?" "Are virtual reality systems going to serve as supplements to our lives, or will individuals so miserable in their daily existence find an obsessive refuge in a preferred cyberspace? What is going to be included, deleted, reformed, and revised? Will virtual reality systems be used as a means of breaking down cultural, racial, and gender barriers between individuals and thus nurture human values?" During this century, responsive technologies are moving ever closer to us, becoming the standard interface through which we gain much of our experience. The ultimate result of living in a cybernetic world may be an artificial global city. Instead of a global village, virtual reality may create a global city, the distinction being that the city contains enough people for groups to form affiliations, in which individuals from different cultures meet together in the same space of virtual reality. The city might be laid out according to a three-dimensional environment that dictates the way people living in different countries may come to communicate and understand other cultures. A special camera, possibly consisting of many video cameras, would capture and transmit every view of the remote locations. Viewers would receive instant feedback as they turn their heads. 
Any number of people could be looking through the same camera system. Although the example described here will probably take many years to develop, its early evolution has been under way for some time, with the steady march of technology moving from accessing information toward providing experience. As well, it is probably still premature to imagine the adoption of virtual reality systems on a massive scale, because the starting price to own one is about $300,000. Virtual reality is now available in games and movies. An example of a virtual reality game is Escape From Castle Wolfenstein. In it, you look through the eyes of an escaped POW in a Nazi death camp. You must walk around in a maze of dungeons where you will eventually fight Hitler. One example of a virtual reality movie is Stephen King's The Lawnmower Man. It is about a mentally handicapped man who uses virtual reality as a means of overcoming his handicap and becoming smarter. He eventually goes mad from his quest for power and enters a computer. From there he is able to control most of the world's computers. The movie ends with us wondering whether he will succeed in world domination. From all of this we have learned that virtual reality is already playing an important part in our world. Eventually, it will let us date, live in other parts of the world without leaving the comfort of our own living rooms, and more. Even though we are quickly becoming a product of the world of virtual reality, we must not lose touch with the world of reality. For reality is the most important part of our lives. f:\12000 essays\technology & computers (295)\VR.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Virtual Reality - What it is and How it Works Imagine being able to point into the sky and fly. Or perhaps walk through space and connect molecules together. These are some of the dreams that have come with the invention of virtual reality. 
With the introduction of computers, numerous applications have been enhanced or created. The newest technology being tapped is that of artificial reality, or "virtual reality" (VR). When Morton Heilig first got a patent for his "Sensorama Simulator" in 1962, he had no idea that 30 years later people would still be trying to simulate reality, and that they would be doing it so effectively. Jaron Lanier first coined the phrase "virtual reality" around 1989, and it has stuck ever since. Unfortunately, this catchy name has caused people to dream up incredible uses for this technology, including using it as a sort of drug. This became evident when, among other people, Timothy Leary became interested in VR. This has also worried some of the researchers who are trying to create very real applications for medical, space, physical, chemical, and entertainment uses, among other things. In order to create this alternate reality, however, you need to find ways to create the illusion of reality with a piece of machinery known as the computer. This is done with several computer-user interfaces used to simulate the senses. Among these are stereoscopic glasses to make the simulated world look real, a 3D auditory display to give depth to sound, sensor-lined gloves to simulate tactile feedback, and head-trackers to follow the orientation of the head. Since the technology is fairly young, these interfaces have not been perfected, making for a somewhat cartoonish simulated reality. Stereoscopic vision is probably the most important feature of VR, because in real life people rely mainly on vision to get places and do things. The eyes are approximately 6.5 centimeters apart, which allows you to have a full-colour, three-dimensional view of the world. Stereoscopy, in itself, is not a very new idea, but the new twist is trying to generate completely new images in real-time. 
In 1838, Sir Charles Wheatstone invented the first stereoscope, with the same basic principle being used in today's head-mounted displays. Presenting different views to each eye gives the illusion of three dimensions. The glasses that are used today work by using what is called an "electronic shutter". The lenses of the glasses interleave? f:\12000 essays\technology & computers (295)\Was the Grand Prix Benificial for Melbourne.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Issues Part -B- Was the Grand Prix, promoted as "The Great Race" and held at Albert Park, beneficial for Melbourne, or was it just a huge waste of taxpayers' money? The race, which was televised to 650 million people in 130 different countries, is expected to pump $50 million into the Victorian economy every year and boost tourism enormously. I, along with the owners of seventy-two percent of hotels, motels, restaurants and other entertainment complexes, agree that Albert Park having the Grand Prix will have a positive impact on business. In fact, it pumped $10 - $15 million into local business. This meant these businesses put on more part-time staff, who gained valuable work experience, and there was also a flow-on effect to suppliers of these industries. When the race was held in Adelaide, fifty-nine percent of interstate visitors and forty-five percent of overseas visitors would not have come to Adelaide in a two-year period if not for the Grand Prix. Albert Park getting the Grand Prix created between 1000 and 1500 new jobs. The Grand Prix will promote Victoria on an international scale, with international press, television and media carrying out worldwide coverage of this event. This could convince people to come and visit Melbourne and would also be a major tourism boost. Approximately $23.8 million has been spent overhauling the park and upgrading the lakeside track. 
They built better fences and barricades to help protect spectators in case of a crash, and the track is said to be the safest and finest in the world, creating a benchmark for Albert Park. Temporary seating catered for 150,000 people, and attendance was approximately 400,000 over the four days. 9,000 part-time jobs and 1,000 full-time jobs were created over the weekend. The "greenies" are still trying to stop the race at Albert Park. First it was "Save The Park" and now it's "Stop The Grand Prix." At first they protested about the cutting down of hundreds of trees to make way for the track. But this has been overcome by the planting of 5000 new trees, which would cover 16 football ovals. This is almost double the number of trees that were there previously. They don't care about the huge impact that the race had on Melbourne; instead they unsuccessfully protest against it, and by doing so they have cost the Victorian taxpayers $1.3 million. But the track has already been built and the first race held, so there is no chance of it being removed, and the park could never be transformed back to its original state. Although there was approximately 5,000 tons of rubbish, it has all been cleaned up, and in the process a number of people gained temporary employment. Some residents of Albert Park disagree with the idea of the Grand Prix. They say it spoils the "park effect" and that the fumes will kill all the plant and animal life that was there previously. They say their houses will be engulfed in fumes and that it would not be very safe for their young children. They do not feel safe with their houses so close to the track. But on the other hand, because their houses are so close to the track, the value of their homes will rise. Because the race was held so recently, it is hard to judge how big an impact it had on the economy. The same time next year would probably be a better time to judge the impact it had. 
But already we can see the benefits: Albert Park is now known on an international scale, many new jobs have been created, and local and big businesses have benefited from tourism. So it is quite obvious that the race was an overall success, no thanks to the protesters.
f:\12000 essays\technology & computers (295)\Welcome to the Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Good evening. I would like to welcome you all here this evening to our symposium, "Then & Now: The Evolution of a Society." Things have changed dramatically since the 1920s, and the panel of researchers you see before you is here to talk to you about the events and issues of the past 75 years that have led up to society as we know it today. Starting from left to right, I would like to introduce this panel to you. First we have Lori, an expert in the political arena, who will touch on how politics itself has not really changed, but how politicians get their message across has. Next to her we have Yolanda, an expert on African American leaders and the profound effect they have had over the last 75 years. Next on the panel is Peggy, who is here to talk about the role of women in society and the part they have played in the formation of the world as we know it today. None of us would be here today if education had not played a role in the evolution of society, and here to talk to you about the evolution of education is Jaime. Later this evening I will be speaking about how dramatically technology has impacted the world as we know it; my name is Matthew. And finally, our last researcher is Christine, who will discuss the influence of music on this evolution of society from then until now.
I will bring each of the speakers up to make a brief statement of their research, and after they have finished we will open the floor for questions. Again, welcome here this evening. I would now like to bring up our first expert, Lori.
f:\12000 essays\technology & computers (295)\What is ISDN.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What is ISDN? ISDN, which stands for Integrated Services Digital Network, is a system for digitizing phone networks that has been in the works for over a decade. This system allows audio, video, and text data to be transmitted simultaneously across the world using end-to-end digital connectivity. The original telephone system used analog signals to transmit a signal across telephone wires: the voice was carried by modulating an electric current with a waveform from a microphone, and the receiving end would then vibrate a speaker coil so the sound could travel back to the ear through the air. Most telephones today still use this method. Computers, however, are digital machines. All information stored on them is represented by bits, each a zero or a one. Multiple bits are used to represent characters, which in turn can represent words, numbers, programs, etc. Analog signals are just voltages varying over time on the wires; digital signals are represented and transmitted as pulses with a limited number of discrete voltage levels. [Hopkins] The modem was certainly a big breakthrough in computer technology. It allowed computers to communicate with each other by converting their digital communications into an analog format to travel through the public phone network. However, there is a limit to the amount of information that a common analog telephone line can carry; currently it is about 28.8 kbit/s. [Hopkins] ISDN allows multiple digital channels to be operated simultaneously through the same regular phone jack in a home or office.
The change comes about when the telephone company's switches are upgraded to handle digital calls. Therefore, the same wiring can be used, but a different signal is transmitted across the line. [Hopkins] Previously, it was necessary to have a phone line for each device you wished to use simultaneously. For example, one line each for the phone, fax, computer, and live video conference. Transferring a file to someone while talking on the phone, and seeing their live picture on a video screen would require several expensive phone lines. [Griffiths] Using multiplexing (a method of combining separate data signals together on one channel such that they may be decoded again at the destination), it is possible to combine many different digital data sources and have the information routed to the proper destination. Since the line is digital, it is easier to keep the noise and interference out while combining these signals. [Griffiths] ISDN technically refers to a specific set of services provided through a limited and standardized set of interfaces. This architecture provides a number of integrated services currently provided by separate networks. ISDN adds capabilities not found in standard phone service. The main feature is that instead of the phone company sending a ring voltage signal to ring the bell in your phone, it sends a digital package that tells who is calling (if available), what type of call it is (data/voice), and what number was dialed (if multiple numbers are used for a single line). ISDN phone equipment is then capable of making intelligent decisions on how to answer the call. In the case of a data call, baud rate and protocol information is also sent, making the connection instantaneous. [Griffiths] ISDN Concepts: With ISDN, voice and data are carried by bearer channels (B channels) occupying a bandwidth of 64 kbit/s each. A delta channel (D channel) handles signalling at 16 kbit/s or 64 kbit/s. 
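The channel arithmetic behind these rates is easy to check. The short sketch below (plain Python, nothing ISDN-specific; the variable names are ours) totals the B and D channels for the basic (2B+D) and primary (23B+D) service structures; the extra 8 kbit/s in the primary line rate is the framing overhead of the underlying 1.544 Mbit/s T1 carrier.

```python
B = 64          # bearer (B) channel rate, kbit/s
D_BASIC = 16    # delta (D) channel on the basic rate interface, kbit/s
D_PRIMARY = 64  # delta (D) channel on the primary rate interface, kbit/s
T1_FRAMING = 8  # T1 carrier framing overhead, kbit/s

bri_total = 2 * B + D_BASIC                 # 2B+D  -> 144 kbit/s
pri_payload = 23 * B + D_PRIMARY            # 23B+D -> 1536 kbit/s
pri_line_rate = pri_payload + T1_FRAMING    # 1544 kbit/s = 1.544 Mbit/s

assert bri_total == 144
assert pri_payload == 1536
assert pri_line_rate == 1544
```

The same bookkeeping explains why the basic and primary figures quoted in the literature (144 kbit/s and 1.544 Mbit/s) differ in whether framing overhead is counted.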
H channels are provided for user information at higher bit rates. [Stallings] There are two types of ISDN service: Basic Rate ISDN (BRI) and Primary Rate ISDN (PRI). BRI consists of two 64 kbit/s B channels and one 16 kbit/s D channel, for a total of 144 kbit/s. The basic service is intended to meet the needs of most individual users. PRI is intended for users with greater capacity requirements. Typically the channel structure is 23 B channels plus one 64 kbit/s D channel, for a total of 1.544 Mbit/s. H channels can also be implemented: H0 = 384 kbit/s, H11 = 1536 kbit/s, H12 = 1920 kbit/s. [Stallings] In this paper, I will concentrate on defining the specifics of Basic Rate ISDN for local loop transmission. I will provide an in-depth view of ISDN as it relates to layers 1 to 3 of the seven-layer OSI model. I will also provide the specification for communication at the S/T customer interface. Basic Rate ISDN: Basic Rate Interface (BRI) - The BRI is the fundamental building block of an ISDN network. It is composed of a single 16 kbit/s "D channel", which is used for call setup and control, and two 64 kbit/s "B channels". The B channels can be used to carry voice and both circuit-mode and packet-mode data traffic. The D channel may also be used to carry X.25 packet traffic if the network supports that option. [Griffiths] Basic Rate Interface D Channel - In the analog world, a telephone call is controlled in-band: tones and voltages are sent across the lines for signalling conditions. ISDN does away with this; the D channel becomes the vehicle for signalling. This signalling is called common channel signalling, since a separate channel for signalling is shared by two or more bearer channels. [Hopkins] User-network protocols define how users interact with ISDN networks. Between the user equipment and network equipment is a set of defined interfaces. The U interface is between the central office and the customer premises.
This interface carries information on the twisted pair of wires between the customer and the central office. At the S/T interface, located at the customer's premises, two pairs of wires (one for transmitting, one for receiving) are used. The intermediate device between the U and the S/T interface is known as an NT1. The NT1 is a hybrid that converts from four wires to two and also transforms the 2B+D signal into a different bit stream format. [Griffiths] ISDN and the OSI Model - The OSI (Open Systems Interconnection) seven-layer protocol was developed to promote interoperability in the data world. ISDN, which followed OSI, was designed to be a network technology inhabiting the lower three layers of the OSI model. Consequently, an OSI end system that implements an OSI seven-layer stack can contain ISDN at the lower layers. Also, protocol suites such as TCP/IP (the Internet's Transmission Control Protocol/Internet Protocol) can use the ISDN network. [Griffiths] Layer 1 of User-Network Interface: Layer 1 protocols provide the details that describe how the signals (electrical or optical) are encoded onto the physical medium. These protocols describe how the user data and signalling bits are transformed into line signals, then back again into user data bits. The ISDN layer 1 protocol supports the functions outlined below: B channel transmission, D channel transmission, and the D channel access procedure. [ITU-T, I.430] B Channel Transmission - Layer 1 must support, for each direction of transmission, two independent 64 kbit/s B channels. The B channels contain user data which is switched by the network to provide the end-to-end transmission source. There is no error correction provided by the network on these channels. [ITU-T, I.430] D Channel Transmission - Layer 1 must support, for each direction of transmission, a 16 kbit/s channel for the signalling information. In some networks user packet data may also be supported on the D channel.
[ITU-T, I.430] D Channel Access Procedure - This procedure ensures that if two or more terminals on a point-to-multipoint configuration attempt to access the D channel simultaneously, one terminal will always successfully complete the transmission of its information. [ITU-T, I.430] Binary Organization of the Layer 1 Frame - The structures of Layer 1 frames across the interface are different in each direction of transmission. Both structures are shown in figure 1 below. [Griffiths] A frame is 48 bits long and lasts 250 µs. The bit rate is therefore 192 kbit/s, and each bit is approximately 5.2 µs long. Figure 1 also shows that there is a 2-bit offset between transmit and receive frames. This is the delay between the frame start at the receiver of a terminal and the frame start of the transmitted signal. [Griffiths] Figure 1 also illustrates that the line coding used is AMI (Alternate Mark Inversion): a logical 1 is transmitted as zero volts and a logical 0 as a positive or negative pulse. Note that this convention is the inverse of that used on line transmission systems. The nominal pulse amplitude is 750 mV. [Griffiths] A frame contains several L bits. These are balance bits to prevent a build-up of DC on the line. For the TE-to-NT direction, where each B channel may come from a different terminal, each terminal's output contains an L bit to form a balanced block. [ITU-T, I.430] Examining the frame in the NT-to-TE direction, the first bits of the frame are the F/L pair, which is used in the frame alignment procedure. The start of a new frame is signalled by the F/L pair violating the AMI rules. Once a violation has occurred there must be a second violation to restore correct polarity before the next frame. This takes place with the first mark after the F/L pair. The FA bit ensures this second violation occurs should there not be a mark in the B1, B2, D, E, or A channels.
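The frame-rate arithmetic and the inverted AMI convention just described can be sketched in a few lines. This is an illustrative snippet only: it checks the 48-bit/250 µs numbers and encodes bits under the stated convention (1 → 0 V, 0 → alternating ±750 mV pulses), deliberately ignoring the code violations used for frame alignment.

```python
FRAME_BITS = 48
FRAME_PERIOD_S = 250e-6                           # one frame every 250 microseconds

bit_rate = FRAME_BITS / FRAME_PERIOD_S            # 192,000 bit/s
bit_time_us = 1e6 * FRAME_PERIOD_S / FRAME_BITS   # ~5.21 microseconds per bit

def ami_encode(bits, pulse_mv=750):
    """Inverted AMI as on the S/T bus: logical 1 -> 0 V, logical 0 -> a pulse
    of alternating polarity (the alternation keeps DC off the line)."""
    out, polarity = [], +1
    for b in bits:
        if b == 1:
            out.append(0)
        else:
            out.append(polarity * pulse_mv)
            polarity = -polarity
    return out

assert bit_rate == 192_000
assert ami_encode([1, 0, 0, 1, 0]) == [0, 750, -750, 0, 750]
```

A frame-alignment violation is simply a pulse that repeats the previous polarity instead of alternating, which is why it can never be produced by the encoder above.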
The E channel is an echo channel in which D-channel bits arriving at the NT are echoed back to the TEs. There is a 10-bit offset between a D-channel bit leaving a terminal, traveling to the NT, and being echoed back in the E channel. [ITU-T, I.430] The A bit is used in the activation procedure to indicate to the terminals that the system is in synchronization. Next is a byte of the B2 channel, a bit of the E channel and a bit of the D channel, followed by an M bit, which is used for multiframing. The M bit identifies certain FA bits which can be stolen to provide a management channel. [ITU-T, I.430] The B1, B2, D, and E channels are then repeated, along with the S bit, which is a spare bit. [ITU-T, I.430] Layer 1 D Channel Contention Procedure - This procedure ensures that, even in the case of two or more terminals attempting to access the D channel simultaneously, one terminal will always successfully complete the transmission of its information by first gaining control of the D channel and then transmitting its information. The procedure relies on the fact that the information to be transmitted consists of layer 2 frames delimited by flags consisting of the binary pattern 01111110. Layer 2 applies a zero-bit insertion algorithm to prevent flag imitation by a layer 2 frame. The interframe time fill consists of binary 1s, which are represented by zero volts. The zero-volt line signal is generated by the TE transmitter going high impedance. This means a binary 0 from a parallel terminal will overwrite a binary 1. Collision detection is done by the terminal monitoring the E channel (the D channel echoed from the NT). [ITU-T, I.430] To access the D channel a terminal looks for the interframe time fill by counting the number of consecutive binary 1s in the D channel. Should a binary 0 be received, the count is reset.
When the number of consecutive 1s reaches a predetermined value (which is greater than the number of consecutive 1s possible in a frame because of the zero bit insertion algorithm) the counter is reset and the terminal may access the D channel. When a terminal has just completed transmitting a frame the value of the count needed to be reached before another frame may be transmitted is incremented by 1. This gives other terminals a chance to access the channel. Hence an access and priority mechanism is established. [ITU-T, I.430] There is still the possibility of collision between two terminals of the same priority. This is detected and resolved by each terminal comparing its last transmitted bit with the next E bit. If they are the same the terminal continues to transmit. If, however, they are different the terminal detecting the difference ceases transmission immediately and returns to the D channel monitoring state leaving the other terminal to continue transmission. [ITU-T, I.430] Layer 1 Activation/Deactivation Procedure - This procedure permits activation of the interface from both the terminal and network side, but deactivation only from the network side. This is because of the multi-terminal capability of the interface. Activation and deactivation information is conveyed across the interface by the use of line signals called 'Info signals'. [ITU-T, I.430] Info 0 is the absence of any line signal; this is the idle state with neither terminals nor the NT working. [ITU-T, I.430] Info 1 is flags transmitted from a terminal to the NT to request activation. Note this signal is not synchronized to the network. [ITU-T, I.430] Info 2 is transmitted from the NT to the TEs to request their activation or to indicate that the NT has activated as a response to receiving an Info 1. An Info 2 consists of Layer 1 frames with a high density of binary zeros in the data channels which permits fast synchronization of the terminals. 
[ITU-T, I.430] Info 3 and Info 4 are frames containing operational data, transmitted from the TE and NT respectively. [ITU-T, I.430] The principal activation sequence is commenced when a terminal transmits an Info 1. The NT activates the local transmission system, which indicates to the exchange that the customer is activating. The NT1 responds to the terminals with an Info 2, to which the TEs synchronize. The TEs respond with an Info 3 containing operational data, and the NT is then in a position to send Info 4 frames. Note that all terminals activate in parallel; it is not possible to have just one terminal activated in a multi-terminal configuration. The network activates the bus by the exchange activating the local network transmission system. Deactivation occurs when the exchange deactivates the local network transmission system. [ITU-T, I.430] Layer 2 of User-Network Interface: The Layer 2 recommendation describes the high-level data link control (HDLC) procedures commonly referred to as the Link Access Procedure for the D channel, or LAP D. The objective of Layer 2 is to provide a secure, error-free connection between two endpoints connected by a physical medium. Layer 3 call control information is carried in the information elements of Layer 2 frames, and it must be delivered in sequence and without error. Layer 2 also has the responsibility for detecting and retransmitting lost frames. LAP D was based originally on LAP B of the X.25 Layer 2 recommendation. However, certain features of LAP D give it significant advantages. The most striking difference is the possibility of frame multiplexing: by having separate addresses at Layer 2, many LAPs can exist on the same physical connection. It is this feature that allows up to eight terminals to share the signalling channel in the passive bus arrangement.
[ITU-T, Q.920] Each Layer 2 connection is a separate LAP, and the termination points for the LAPs are within the terminals at one end and at the periphery of the exchange at the other. Layer 2 operates as a series of frame exchanges between the two communicating, or peer, entities. The frames consist of a sequence of eight-bit elements, and the position of each element in the sequence defines its meaning, as shown in Figure 2 below. [ITU-T, Q.920] A fixed pattern called a flag is used to indicate both the beginning and end of a frame. Two octets are needed for the Layer 2 address and carry a service access point identifier (SAPI), a terminal endpoint identifier (TEI) and a command/response bit. The control field is one or two octets, depending on the frame type, and carries information that identifies the frame and the Layer 2 sequence numbers used for link control. The information element is only present in frames that carry Layer 3 information, and the Frame Check Sequence (FCS) is used for error detection. A detailed breakdown of the individual elements is given in Figures 3 and 4 below. [ITU-T, Q.920] What cannot be shown in the diagrams is the procedure used to avoid imitation of the flag by the data octets. This is achieved by examining the serial stream between flags and inserting an extra 0 after any run of five 1 bits. The receiving Layer 2 entity discards a 0 bit if it is preceded by five 1s. [ITU-T, Q.920] Layer 2 Addressing - Layer 2 multiplexing is achieved by employing a separate Layer 2 address for each LAP in the system. To carry the LAP identity the address is two octets long and identifies the intended receiver of a command frame and the transmitter of a response frame. The address has only local significance and is known only to the two end-points using the LAP. No use can be made of the address by the network for routing purposes, and no information about its value will be held outside the Layer 2 entity. [ITU-T, Q.921] The Layer 2 address is constructed as shown in Figure 3.
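The zero-bit insertion (flag transparency) procedure described above is simple to state in code. The sketch below uses hypothetical helper names: the transmitter stuffs a 0 after every run of five 1s so the flag pattern 01111110 can never occur inside a frame, and the receiver strips that 0 again.

```python
FLAG = [0, 1, 1, 1, 1, 1, 1, 0]  # the HDLC frame delimiter

def stuff(bits):
    """Insert a 0 after any run of five consecutive 1s (transmit side)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)   # stuffed bit: breaks up the run
            run = 0
    return out

def destuff(bits):
    """Discard the 0 that follows any run of five 1s (receive side)."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            i += 1          # skip the stuffed 0
            run = 0
        i += 1
    return out

# Even a payload that looks exactly like the flag survives the round trip.
assert stuff([1, 1, 1, 1, 1, 1]) == [1, 1, 1, 1, 1, 0, 1]
assert destuff(stuff(FLAG)) == FLAG
```

Because six consecutive 1s can now only appear in a real flag, the receiver can delimit frames unambiguously, which is also what the D-channel contention procedure at Layer 1 relies on.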
The Service Access Point Identifier (SAPI) is used to identify the service intended for the signalling frame. An extension of the use of the D channel is to use it for access to a packet service as well as for signalling. Consider the case of digital telephones sharing a passive bus with packet terminals. The two terminal types will be accessing different services and possibly different networks. It is possible to identify the service being invoked by using a different SAPI for each service. This gives the network the option of handling the signalling associated with different services in separate modules. In a multi-network ISDN it allows Layer 2 routing to the appropriate network. The value of the SAPI is fixed for a given service. [ITU-T, Q.921] The Terminal Endpoint Identifier (TEI) takes a range of values that are associated with terminals on the customer's line. In the simplest case each terminal will have a single unique TEI value. The combination of TEI and SAPI identifies the LAP and provides a unique Layer 2 address. A terminal will use its Layer 2 address in all transmitted frames, and only received frames carrying the correct address will be processed. [ITU-T, Q.921] In practice a frame originating from telephony call control has a SAPI that identifies the frame as 'telephony', and all telephone equipment examines this frame. Only the terminal whose TEI agrees with that carried by the frame will pass it to the Layer 2 and Layer 3 entities for processing. There is also a SAPI identified in the standards for user data packet communication. [ITU-T, Q.921] Since it is important that no two TEIs are the same, the network has a special TEI management entity which allocates TEIs on request and ensures their correct use. The values that TEIs can take fall into the following ranges: 0-63, non-automatic assignment TEIs; 64-126, automatic assignment TEIs; 127, the global TEI. [ITU-T, Q.921] Non-automatic TEIs are selected by the user; their allocation is the responsibility of the user.
Automatic TEIs are selected by the network; their allocation is the responsibility of the network. The global TEI is permanently allocated and is referred to as the broadcast TEI. [ITU-T, Q.921] Terminals which use TEIs in the range of 0-63 need not negotiate with the network before establishing a Layer 2 connection. Terminals which use TEIs in the range 64-126 cannot establish a Layer 2 connection until they have requested a TEI from the network. In this case it is the responsibility of the network not to allocate the same TEI more than once at any given time. The global TEI is used to broadcast information to all terminals within a given SAPI; for example a broadcast message to all telephones, offering an incoming telephone call. [ITU-T, Q.921] Layer 2 Operation - The function of Layer 2 is to deliver Layer 3 frames, across a Layer 1 interface, error free and in sequence. It is necessary for a Layer 2 entity to interface both Layer 1 and Layer 3. To highlight the operation of Layer 2 we will consider the operation of a terminal as it attempts to signal with the network. [ITU-T, Q.921] It is the action to establish a call that causes protocol exchange between terminal and network. If there has been no previous communication it is necessary to activate the interface in a controlled way. A request for service from the customer results in Layer 3 requesting a service from Layer 2. Layer 2 cannot offer a service unless Layer 1 is available and so a request is made to Layer 1. Layer 1 then initiates its start-up procedure and the physical link becomes available for Layer 2 frames. Before Layer 2 is ready to offer its services to Layer 3 it must initiate the Layer 2 start-up procedure known as 'establishing a LAP'. [ITU-T, Q.921] LAP establishment is achieved by the exchange of Layer 2 frames between the Layer 2 handler in the terminal and the corresponding Layer 2 handler in the network. 
The purpose of this exchange is to align the state variables that will be used to ensure the correct sequencing of information frames. Before the LAP has been established, the only frames that may be transmitted are unnumbered frames. The establishment procedure requires one end-point to transmit a Set Asynchronous Balanced Mode Extended (SABME) frame and the far end to acknowledge it with an Unnumbered Acknowledgment (UA). [ITU-T, Q.921] Once the LAP is established, Layer 2 is able to carry the Layer 3 information and is said to be in the 'multiple frame established' state. In this state Layer 2 operates its frame protection mechanisms. Figure 5 below shows a normal Layer 2 frame exchange. [ITU-T, Q.921] Once established, the LAP operates an acknowledged service in which every information frame must be responded to by the peer entity. The most basic response is the Receiver Ready (RR) response frame. Figure 5 shows the LAP establishment and the subsequent I frame and RR exchanges. The number of I frames allowed to be outstanding without an acknowledgment is defined as the window size and can vary between 1 and 127. For telephony signalling applications the window size is 1, and after transmitting an I frame the Layer 2 entity will await a response from the corresponding peer entity before attempting to transmit the next I frame. Provided there are no errors, all that would be observed on the bus would be the exchange of I frames and RR responses. However, Layer 2 is able to maintain the correct flow of information in the face of many different error types. [ITU-T, Q.921] Layer 2 Error Control - It is unlikely that a frame will disappear completely, but it is possible for frames to be corrupted by noise at Layer 1. Corrupted frames will be received with invalid Frame Check Sequence (FCS) values and consequently discarded.
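The window-size-1 behaviour described above amounts to stop-and-wait: send one I frame, hold the next until the peer's RR arrives, and fall back on a retransmission timer when it does not. The loop below is a simplification of the Q.921 procedure, not the real state machine; `transmit` and `await_rr` are assumed callbacks, and the poll-bit inquiry is folded into a plain retransmission.

```python
def send_i_frames(frames, transmit, await_rr, max_retries=3):
    """Stop-and-wait delivery with window size 1.

    Each I frame is transmitted and the sender blocks until `await_rr`
    reports the peer's Receiver Ready acknowledgment. On timer expiry the
    frame is retransmitted; after `max_retries` failures the link is
    presumed dead and the LAP must be re-established.
    """
    for frame in frames:
        for _attempt in range(1 + max_retries):
            transmit(frame)
            if await_rr():              # RR arrived before the timer expired
                break
        else:                           # retry budget exhausted
            return "re-establish LAP"
    return "all acknowledged"

# A peer that drops the first copy of the second frame but acknowledges the retry.
responses = iter([True, False, True])
sent = []
assert send_i_frames(["I0", "I1"], sent.append, lambda: next(responses)) == "all acknowledged"
assert sent == ["I0", "I1", "I1"]      # I1 was retransmitted once
```

With a larger window the same loop would pipeline frames and use REJ-driven go-back-N recovery instead, as described below for out-of-sequence N(S) values.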
[ITU-T, Q.920] The frame check sequence is generated by dividing the bit sequence starting at the address up to (but not including) the start of the frame check sequence by the generator polynomial x^16 + x^12 + x^5 + 1. In practical terms this is done by a shift register, as shown in figure 6. All registers are preset to 1 initially. At the end of the protected bits the shift register contains the remainder from the division. The 1's complement of the remainder is the FCS. At the receiver the same process is gone through, but this time the FCS is included in the division process. In the absence of transmission errors the remainder should always be 0001 1101 0000 1111. [ITU-T, Q.920] The method for recovering from a lost frame is based on the expiration of a timer. A timer is started every time a command frame is transmitted and is stopped when the appropriate response is received. This single timer is thus able to protect both the command and the response, as the loss of either will cause it to expire. [ITU-T, Q.920] When the timer expires it is not possible to tell which of the two frames has been lost, and the action taken is the same in both cases. Upon the timer expiring, Layer 2 transmits a command with the poll bit set. This frame forces the peer to transmit a response that indicates the values held by its state variables. It is possible to tell from the value carried by the response frame whether or not the original frame was received. If the first frame was received, the solicited response frame will be the same as the lost response frame and is an acceptable acknowledgment. If however the original frame was lost, the solicited response will not be an appropriate acknowledgment, and the Layer 2 entity will know that a retransmission is required. It is possible for the same frame to be lost more than once, and Layer 2 will retransmit the frame up to three times.
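The shift-register division behind the FCS can be modeled bit by bit. The sketch below is an illustrative software version (it ignores the LSB-first bit ordering of real HDLC serialization): the register is preset to all 1s, the protected bits are divided by x^16 + x^12 + x^5 + 1, the FCS is the complement of the remainder, and an error-free receiver is always left holding the fixed remainder 0001 1101 0000 1111 (hexadecimal 1D0F).

```python
GEN = 0x1021  # low 16 bits of x^16 + x^12 + x^5 + 1

def crc_register(bits, init=0xFFFF):
    """Run the division shift register over a bit sequence."""
    reg = init
    for b in bits:
        feedback = ((reg >> 15) & 1) ^ b   # bit leaving the register XOR input
        reg = (reg << 1) & 0xFFFF
        if feedback:
            reg ^= GEN                     # subtract the generator polynomial
    return reg

def make_fcs(bits):
    """The FCS is the ones' complement of the remainder, sent as 16 bits."""
    fcs = crc_register(bits) ^ 0xFFFF
    return [(fcs >> i) & 1 for i in range(15, -1, -1)]

MAGIC_REMAINDER = 0x1D0F  # binary 0001 1101 0000 1111

def frame_ok(bits_including_fcs):
    """An error-free frame always leaves the same fixed remainder behind."""
    return crc_register(bits_including_fcs) == MAGIC_REMAINDER

message = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1]
framed = message + make_fcs(message)
assert frame_ok(framed)
assert not frame_ok([framed[0] ^ 1] + framed[1:])  # a single flipped bit is caught
```

The fixed receiver remainder is what makes the check cheap: the receiver never has to extract and compare the FCS field, it simply runs the whole frame through the register and compares against one constant.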
If after three retransmissions of the frame the correct response has not been received, Layer 2 will assume that the connection has failed and will attempt to re-establish the LAP. [ITU-T, Q.921] Another possible protocol error is the arrival of an I frame with an invalid send sequence number N(S). This error is more likely to occur when the LAP is operating with a window size greater than one. If, for example, the third frame in a sequence of four is lost, the receiving Layer 2 entity will know that a frame has been lost from the discontinuity in the sequence numbers. The Layer 2 must not acknowledge the fourth frame, as this would imply acknowledgment of the lost third frame. The corrective action is to send a Reject (REJ) frame with the receive sequence number N(R) equal to N(S) + 1, where N(S) is the send variable of the last correctly received I frame, in this case I frame 2. This does two things: first, it acknowledges all the outstanding I frames up to and including the second I frame; second, it causes the sending end to retransmit all outstanding I frames starting with the lost third frame. [ITU-T, Q.920] The receipt of a frame with an out-of-sequence, or invalid, N(R) does not indicate a frame loss and cannot be corrected by retransmissions. It is necessary in this case to re-establish the LAP to realign the state variables at each end of the link. [ITU-T, Q.920] The Receiver Not Ready (RNR) frame is used to inhibit the peer Layer 2 from transmitting I frames. The reasons for wanting to do this are not detailed in the specification, but it is possible to imagine a situation where Layer 3 is only one of many functions to be serviced by a microprocessor and a job of higher priority requires that no Layer 3 processing is performed. [ITU-T, Q.920] Another frame specified in Layer 2 is the FRaMe Reject frame (FRMR). This frame may be received by a Layer 2 entity but may not be transmitted.
It is included in the recommendation to preserve alignment between LAP D and LAP B. After the detection of a frame reject condition the data link is reset. [ITU-T, Q.920] Disconnecting the LAP - After Layer 3 has released the call it informs Layer 2 that it no longer requires a service. Layer 2 then performs its own disconnection procedures so that ultimately Layer 1 can disconnect and the transmission systems associated with the local line and the customer's bus can be deactivated. [ITU-T, Q.921] Layer 2 disconnection is achieved when the frames disconnect (DISC) and UA are exchanged between peers. At this point the LAP can no longer support the exchange of I frames and supervisory frames. [ITU-T, Q.921] The last frame type to be considered is the Disconnect Mode (DM) frame. This frame is an unnumbered acknowledgment and may be used in the same way as a UA frame. It is used as a response to a SABME if the Layer 2 entity is unable to establish the LAP, and a response to a DISC if the Layer 2 entity has already disconnected the LAP. [ITU-T, Q.921] TEI Allocation - Because each terminal must operate using a unique TEI, procedures have been defined in a Layer 2 management entity to control their use. The TEI manager has the ability to allocate, remove, check, and verify TEIs that are in use on the customer's bus. As the management entity is a separate service point all messages associated with TEI management are transmitted with a management SAPI. [ITU-T, Q.921] TEI management procedures must operate regardless of the Layer 2 state and so the unnumbered information frame (UI) is used for all management messages. The UI frames have no Layer 2 response and protection of the frame content is achieved by multiple transmissions of the frame. In order to communicate with terminals which have not yet been allocated TEIs a global TEI is used. All management frames are transmitted on a broadcast TEI which is associated with a LAP that is always available. 
All terminals can transmit and receive on the broadcast TEI as well as on their own unique TEI. All terminals on the customer's line will process all management frames. To ensure that only one terminal acts upon a frame, a unique reference number is passed between the terminal and the network. This reference number is contained within an element in the UI frame and is either a number randomly generated by the terminal or the TEI of the terminal, depending on the exact situation. Figure 7 below shows the frame exchange required for a terminal to be allocated a TEI and establish its data link connection. [ITU-T, Q.921] Layer 3 of User-Network Interface: This layer effects the establishment and control of connections. It is carried in Layer 2 frames, as can be seen in figure 8. [ITU-T, Q.930] The first octet contains a protocol discriminator, which gives the D channel the capability of simultaneously supporting additional communications protocols in the future. The bits shown in figure 8 are the standard for user-network call control messages. [ITU-T, Q.930] The call reference value in the third octet is used to identify the call with which a particular message is associated. Thus a call can be identified independently of the communications channel on which it is supported. The message type, coded in the fourth octet, describes the intention of the message (e.g. a SETUP message to request call establishment). These are listed in Table 1 at the end of this paper. A number of other information elements may be included following the message type code in the fourth octet. The exact contents of a message are dependent on the message type. [ITU-T, Q.931] The message sequence for call establishment is shown in figure 9. In order to make an outgoing call request, a user must send all of the necessary call information to the network. Furthermore, the user must specify the particular bearer service required for the call (i.e.
Speech, 64 kbit/s unrestricted, or 3.1 kHz Audio) and any terminal compatibility information which must be checked at the destination. [ITU-T, Q.931] The initial outgoing call request may be made in an en bloc or overlap manner. Figure 9 illustrates the call establishment procedures. If overlap sending is used then the SETUP message must contain the bearer service request, but the facility requests and called party number information may be segmented and conveyed in a sequence of INFORMATION messages as shown. Furthermore, if a speech bearer service is requested and no call information is contained in the SETUP message, then the network will return in-band dial tone to the user until the first INFORMATION message has been received. [ITU-T, Q.931] Following the receipt of sufficient information for call establishment, the network returns a CALL PROCEEDING f:\12000 essays\technology & computers (295)\What really is a hacker.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Dan Parks Julie Jackson - Instructor CIS 101 11-18-96 What really is a hacker? There is a common misconception among the general public about what constitutes a hacker and what hacking is. Hacking is defined as "gaining illegal entry into a computer system, with the intent to alter, steal, or destroy data." The validity of this definition is still being debated, but most individuals would describe hacking as gaining access to information which should be free to all. Hackers generally follow some basic principles, and hold these principles as their "ethical code." There are also a few basic "hacker rules" that are usually followed by all in this unique group. The principles that hackers abide by are characteristic of most people who consider themselves to be hackers. The first, which is universally agreed upon, is that access to computers should be free and unlimited. 
This is not meant to be an invasion-of-privacy issue, but rather a call for free use of all computers and what they have to offer. They also believe that anyone should be able to use all of a computer's resources, with no restrictions as to what may be accessed or viewed. This belief is controversial: not only could it infringe upon people's right to privacy, it could give away trade secrets as well. Hackers also hold a deep mistrust of authority; some consider authority to be a constricting force. Not all hackers believe in this ethic, but generally authority represents something that would keep people from being able to have full access and/or free information. Along with the "ethical code" of hackers there are a few basic "hacking rules" that are followed, sometimes even more closely than their own code. Keep a low profile; no one ever suspects the quiet guy in the corner. If suspected, keep a lower profile. If accused, simply ignore. If caught, plead the 5th. Hackers consider a computer to be a tool and believe that to limit its accessibility is wrong. Hacking would cease if there were no barriers to what information could be accessed freely. Limiting the information that someone may attain hampers the ability to be curious and creative. These people do not want to destroy; rather, they want access to new technology, software, or information. Their creations are considered an art form and are looked upon much like an artist views a painting. References Consulted Internet. http://www.ling.umu.se/~phred/hackfaq.txt Internet. http://www.jargon.com/~backdoor Internet. http://www.cyberfractal.com/~andes.html f:\12000 essays\technology & computers (295)\What Should And Shouldnt Computers Be Allowed To Run .TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Computers have always scared people, not just because they can be confusing and hard to operate, but also because of how they affect people's everyday lives. What jobs should highly advanced computers be able to run? 
This question can involve ethics, privacy, security, and many other topics. What jobs can and can't we leave to the computer? As computers grow more and more advanced, not to mention complicated, so grows the number of jobs that can be filled by computers. But can we leave a job such as doctor to a highly advanced computer system? There are a great many moral issues involved. What would happen if the doctor made a mistake? Could you sue the computer? What about the computer programmer? One error in the program could mean death for a patient. One job that I'm sure many people would give to a computer if they had the chance is that of a lawyer. This would eliminate the problem that occurs when someone with money is in trouble: they buy the best lawyer money can buy, while the person without money cannot afford the great lawyers the other guy has. With this system, one single lawyer program could be provided to everyone, so that the process of dispensing justice would be much more fair. What about a judge and jury? Could a computer replace them? Is it right for a computer to pronounce sentence on an individual? Because computers don't have any kind of actual thought or will, some jobs would be perfect for them. Security would be a good job for a computer to handle. People like their privacy and don't want to be watched over by someone all the time. If computers could tell that a crime was happening without a human to point it out, it might be all right to install these systems everywhere to detect crimes taking place without interfering with anyone's privacy. I'm not talking about "Big Brother" from 1984, but something that would be fair to everyone. There is also the problem of changing jobs due to advancements in computer technology. There will be the same number of jobs available, but not at the same levels. More education will be needed for these new jobs. 
Computers might take away quite a few jobs from people doing manual labor on an assembly line, but at the same time, if something breaks down, there will have to be someone to come in and fix it. This is the effect computers will have as they become more and more advanced. The only problem with this is that some people may be unwilling to change. It would be hard for someone who has worked in manual labor all their life to suddenly become a computer technician. That is one of the costs we will have to live with, though, if there are to be advancements. But what about even further into the future? Will computers by that time be so advanced that they can fix themselves and "evolve" on their own? Certainly then there would be job scarcity due to these technological advancements. f:\12000 essays\technology & computers (295)\Why ARJ.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Computer studies. WHY_ARJ.DOC Jan 1997 This document describes the benefits of ARJ. ARJ is now a trend setter in archivers, with other archivers following suit. You can find reviews of ARJ in the following magazine articles: Computer Persönlich, June 12, 1991, Leader of the Pack, Bernd Wiebelt and Matthias Fichtner. In this German magazine, ARJ 2.0 was named Testsieger (Test Winner) over six other archivers including PKZIP and LHA. Compression, speed, documentation, and features were compared. PC Sources, July 1991, Forum, Barry Brenesal, "A new challenger, ARJ 2.0, not only offers the speed of PKZIP, but also has the best compression rate of the bunch." Computer Shopper, September 1991, Shells, Bells, and Files: Compressors for All Cases, Craig Menefee. "ARJ ... is extremely fast and produces excellent compression; it ... has a rich set of options. ... This is a mature technology, and any of these programs will do a fine and reliable job." PC Magazine, October 15, 1991, Squeeze Play, Barry Simon. 
"Jung has combined that foundation with academic research to produce an impressive product. ... If your main criterion is compressed size, ARJ will be one of your two main contenders, along with LHA." SHAREWARE Magazine, Nov-Dec 1991, Fall Releases, Joseph Speaks. "Don't tell the creators of ARJ that PKZIP is the standard for data compression. They probably already know. But that hasn't stopped them from creating a data compression utility that makes everyone - even the folks at PKWare - sit up and take notice. ... but compression statistics don't tell the whole story. The case for using ARJ is strengthened by new features it debuts." BOARDWATCH Magazine, December 1991, ARCHIVE/COMPRESSION UTILITIES. "This year's analysis rendered a surprise winner. Robert K. Jung's ARJ Version 2.22 is a relatively new compression utility that offers surprising performance. The program emerged on the scene within the past year and the 2.22 version was released in October 1991. It rated number one on .EXE and database files and number two behind LHarc Version 2.13 in our directory of 221 short text files." INFO'PC, October 1992, Compression de données: 6 utilitaires du domaine public, Thierry Platon. In this article, the French magazine awarded ARJ 2.20 the Certificat de Qualification Labo-tests InfoPC. PC Magazine, March 16, 1993, PKZIP Now Faster, More Efficient, Barry Simon. "One of the more interesting features is the ability to have a .ZIP file span multiple floppy disks, but this feature is not nearly as well implemented as in ARJ." ARJ FEATURES: 1) Registered users receive technical support from a full-time software author with over FIFTEEN years of experience in technical support and software programming. And YES, ARJ is a full-time endeavor for our software company. ARJ and REARJ have proven to be two of the most reliable archiver products. We test our BETA test releases with the help of thousands of users. 
2) ARJ provides excellent size compression and practical speed compared to the other products currently available on the PC. ARJ is particularly strong compressing databases, uncompressed graphics files, and large documents. One user reported that in compressing a 25 megabyte MUMPS medical database, ARJ produced a compressed file of size 0.17 megabytes while LHA 2.13 and PKZIP 1.10 produced a compressed file of 17 plus megabytes. 3) Of the leading archivers, only ARJ provides the capability of archiving files to multiple volume archives no matter what the destination media. ARJ can archive files directly to diskettes no matter how large or how numerous the input files are and without requiring EXTRA disk space. This feature makes ARJ (DEARJ) especially suitable for distributing large software packages without concerns about fitting entire files on one diskette. ARJ will automatically split files when necessary and will reassemble them upon extraction without using any EXTRA disk space. This multiple volume feature of ARJ makes it suitable as a "cheap" backup utility. ARJ saves pathname information, file date-time stamps, and file attributes in the archive volumes. ARJ can also create an index file with information about the contents of each volume. For systems with multiple drives, ARJ can be configured to save the DRIVE letter information, too. Files contained entirely within one volume are easily extracted using just the one volume. There is no need to always insert the last diskette of the set. In addition, ARJ's data verification facility, unique among archivers, helps ensure reliable backups. 4) The myriad of ARJ commands and options allows the user outstanding flexibility in archiver usage. No other leading PC archiver gives you that flexibility. Here are some examples of ARJ's flexibility. a) Search archives for text data without extracting the archives to disk. b) Save drive letter and pathname information. 
c) Re-order the files within an ARJ archive. d) Merge two or more ARJ archives without re-compressing files. e) Extract files directly to DOS devices. f) Synchronize an archive and a directory of files with just a few commands. g) Compare the contents of an archive and a directory of files byte for byte without extracting the archive to disk. h) Allow duplicates of a file to be archived, producing generations (versions) of a file within an archive. i) Display archive creation and modification date and time. j) And much more. 5) ARJ provides ARJ archive compatibility from revision 1.00 to now. In other words, ARJ version 1.00 can extract the files from an archive created by the current version of ARJ and vice versa. 6) ARJ provides the facility to store EMPTY directories within its archives. This makes it easier to do FULL backups and also to distribute software products that come with EMPTY directories. 7) Both ARJ self-extracting modules provide default pathname support. That means that you can build self-extracting archives of software directories containing sub-directories. The end user of the self-extracting archive does not have to type any command line options to restore the full directory structure of the software. This greatly simplifies software distribution. 8) The ARJ archive data structure, with its header structure and 32-bit CRC, provides excellent archive stability and recovery capabilities. In addition, ARJ is the only archiver that allows you to test an archive during an archive process. With other archivers, you may have already deleted the input files with a "move" command before you could test the built archive. In addition, the test feature allows one to select an actual byte for byte file compare with the original input files. This is especially useful for verifying multi-megabyte files where a 32-bit CRC compare would not provide sufficient reliability. 
9) ARJ provides an optional security envelope facility to "lock" ARJ archives with a unique envelope signature. A "locked" ARJ archive cannot be modified by ARJ or other programs without destroying the envelope signature. This provides some level of assurance to the user receiving a "locked" ARJ archive that the contents of the archive are intact as the "signer" intended. 10) ARJ has MS-DOS 3.x international language support. This makes ARJ more convenient to use with international alphabets. 11) ARJ has many satisfied users in countries all over the world. ARJ customers include the US government and many leading companies including Lotus Development Corp. f:\12000 essays\technology & computers (295)\Why stick to Qwerty.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Computer Science 10, Why we should stick to Qwerty. The Qwerty keyboard - named Qwerty because the letters q, w, e, r, t, y are arranged next to each other - has been the universal standard since the beginning of the 1890s. Since then, there have been many proposals by other keyboard makers to market products that would enable users to type faster. Other proposals put the most frequently used letters - dhiatensor - in the middle row.i Although these keyboards enable users to type far faster than the qwerty keyboard, they are rarely sold. There are several reasons for this. First, there is no need for regular users to type any faster than they currently do. Second, for people whose jobs require fast typing, the new keyboards can lead to bigger health problems that develop from continuous typing. Third, and most importantly, standardization has allowed the qwerty keyboard to firmly hold its position as the keyboard. There are major differences between the two types of keyboard users: regular users and professional typists. 
The regular users are people who use the keyboard for word processing, e-mail, and the Internet; there is not much of a need for them to type extremely fast. They do not type mechanically but rather based on their thoughts, and thinking takes time. In other words, faster keyboards are irrelevant for them because they are not continuously typing; they need to think about what they are going to write, one sentence after another. On the other hand, the typists whose job is simply to type do so continuously. They also happen to be the major victims of Repetitive Strain Injury (RSI), which is in large part caused by continuously stroking the keyboard. In an article about RSI, Huff explains the changes that companies are undergoing to become more productive: Many work practices are changing with automation to increase productivity. These include fewer staff, heavier workloads, more task specialization, faster pacing of work, fewer rest breaks, more overtime, more shift work and nonstandard hours, and more piece work and bonus systems. These work practices can entail very prolonged rapid or forceful repetitive motions leading to fatigue and overuse of muscles.ii Because RSI is a major problem for typists, it would be a suicidal move for them to adopt faster keyboards: more of them would develop RSI. As for the companies that hire these typists, not only would the frequency of RSI increase, but so would the amount of money the companies have to pay to compensate the employees who develop it. The fact that the qwerty keyboard is less efficient prevents typists from developing more serious health problems. Finally, the role of standardization greatly influences where qwerty stands in the keyboard market. Once qwerty was standardized, no other type of keyboard could enter into competition, regardless of how much more efficient it was, because a standardized layout means users need to know just one kind of layout. 
Keyboard layouts are like languages: if different languages are being spoken when people try to communicate with each other, it becomes very difficult to understand one another, and the communication is very inefficient. What if a new keyboard became standardized? Navy studies in the 1940s showed that the change from qwerty to a more efficient keyboard would pay for itself within 10 days.iii However, this study shows the result from the corporation's point of view. Although corporations would certainly be able to make more money in the same amount of time by adopting the new keyboard, there are other factors that are not taken into account - the human cost. If a new, more efficient keyboard were to be standardized, there would be enormous spending on reeducation, relearning, repurchasing, and replacement. In short, the qwerty keyboard is efficient enough for people to use. It's fast enough for regular users, and it's slow enough for typists to avoid further health problems. And any attempt to standardize a new keyboard would be extremely difficult and expensive. Yet people might not even have to concern themselves with keyboards much longer. The advancement of technology keeps bringing wonders to the world. In the near future, voice recognition programs using microphones might replace keyboards. Then RTI - Repetitive Talking Injury - might be a big issue. Who knows? i Huff, C., "Putting technology in its place" in Social Issues in Computing, Huff, C. and Finholt, T. (Eds), McGraw Hill, 1994, p. 2. ii Huff, C., "Computing and your health" in Social Issues in Computing, Huff, C. and Finholt, T. (Eds), McGraw Hill, 1994, pp. 103-104. iii Huff, C., "Putting technology in its place" in Social Issues in Computing, Huff, C. and Finholt, T. (Eds), McGraw Hill, 1994, p. 3. 
f:\12000 essays\technology & computers (295)\Why you should purchase a PC.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Computers are capable of doing more things every year. There are many advantages to knowing how to use a computer, and it is important that everyone know how to use one properly. Using the information I have gathered, along with my own knowledge from my 12 years of computer experience, I will explain the many advantages of owning a computer and knowing how to use a PC, and I will attempt to explain why you should purchase a computer and learn to use one properly. Webster's New World Compact Dictionary defines a computer as "an electronic machine that performs rapid, complex calculations or compiles and correlates data" ("Computer."). While this definition gives one a very narrow view of what a computer is capable of doing, it does describe the basic ideas that I will expand upon. We have been living through an age of computers for a short while now, and there are already many people worldwide who are computer literate. According to Using Computers: A Gateway to Information World Wide Web Edition, over 250 million personal computers (PCs) were in use by 1995, and one out of every three homes had a PC (Shelly, Cashman, & Waggoner, 138). Computers are easy to use when you know how they work and what their parts are. All computers perform the four basic operations of the information processing cycle: input, process, output, and storage. Data, any kind of raw facts, is required for the processing cycle to occur. Data is processed into useful information by the computer hardware. Most computer systems consist of a monitor, a system unit which contains the Central Processing Unit (CPU), a floppy-disk drive, a CD-ROM drive, speakers, a keyboard, a mouse, and a printer. Each component takes part in one of the four operations. The keyboard and mouse are input devices that a person uses to enter data into the computer. 
From there the data goes to the system unit, where it is processed into useful information the computer can understand and work with. Next the processed data can be sent to storage devices or to output devices. Normally output is sent to the monitor and stored on the hard disk or a floppy disk inside the system unit. Output can also be printed out through the printer, or played through the speakers as sound, depending on the form it takes after it is processed. Once you have grasped a basic understanding of the parts and operations of a computer, you can soon discover what you can do with computers to make life easier and more enjoyable. Being computer literate allows you to use many powerful software applications and utilities to do work for school, business, or pleasure. Microsoft is the current leading producer of many of these applications and utilities. Microsoft produces software called operating systems that manage and regulate the information processing cycle. The oldest of these is MS-DOS, a single-user system that uses typed commands to initiate tasks. Currently Microsoft has available operating systems that use visual cues such as icons to help enter data and run programs. These operating systems run under an environment called a graphical user interface (GUI). Such operating systems include Windows 3.xx, Windows 95, and Windows NT Workstation. Windows 95 is geared more for use in the home for productivity and game playing, whereas Windows NT is more business oriented. The article entitled "Mine, All Mine" in the June 5, 1995 issue of Time stated that 8 out of 10 PCs worldwide would not be able to start or run if it were not for Microsoft's operating systems like MS-DOS, Windows 95, and Windows NT (Elmer-Dewitt, 1995, p. 50). By no means has Microsoft limited itself to operating systems alone. 
Microsoft has also produced a software package called Microsoft Office that is very useful in creating reports, databases, spreadsheets, presentations, and other documents for school and work. Microsoft Office: Introductory Concepts and Techniques provides a detailed, step-by-step approach to the four programs included in Microsoft Office. Included in this package are Microsoft Word, Microsoft Excel, Microsoft Access, and Microsoft PowerPoint. Microsoft Word is a word processing program that makes creating professional-looking documents such as announcements, resumes, letters, address books, and reports easy to do. Microsoft Excel, a spreadsheet program, has features for data organization, calculations, decision making, and graphing. It is very useful in making professional-looking reports. Microsoft Access, a powerful database management system, is useful in creating and processing data in a database. Microsoft PowerPoint is ". . . a complete presentation graphics program that allows you to produce professional looking presentations" (Shelly, Cashman, & Vermaat, 2). PowerPoint is flexible enough that you can create electronic presentations, overhead transparencies, or even 35mm slides. Microsoft also produces entertainment and reference programs. "Microsoft's Flight Simulator is one of the best selling PC games of all time" (Elmer-Dewitt, 50). Microsoft's Encarta is an electronic CD-ROM encyclopedia that makes for a fantastic alternative to 20-plus-volume book encyclopedias. In fact, it is so popular that it outsells the Encyclopedia Britannica. These powerful business, productivity, and entertainment applications are just the beginning of what you can do with a PC. Knowing how to use the Internet will allow you access to a vast resource of facts, knowledge, information, and entertainment that can help you do work and have fun. 
According to Netscape Navigator 2 Running Under Windows 3.1, "the Internet is a collection of networks, each of which is composed of a collection of smaller networks" (Shelly, Cashman, & Jordan, N2). Information can be sent over the Internet through communication lines in the form of graphics, sound, video, animation, and text. These forms of computer media are known as hypermedia. Hypermedia is accessed through hypertext links, which are pointers to the computer where the hypermedia is stored. The World Wide Web (WWW) is the collection of these hypertext links throughout the Internet. Each computer that contains hypermedia on the WWW is known as a Web site and has Web pages set up for users to access the hypermedia. Browsers such as Netscape allow people to "surf the net" and search for the hypermedia of their choice. There are millions of examples of hypermedia on the Internet. You can find art, photos, information on business, the government, and colleges, television schedules, movie reviews, music lyrics, online news and magazines, sports sites of all kinds, games, books, and thousands of other kinds of hypermedia on the WWW. You can send electronic mail (e-mail), chat with other users around the world, buy airline, sports, and music tickets, and shop for a house or a car. All of this, and more, provides one with a limitless supply of information for research, business, entertainment, or other personal use. Online services such as America Online, Prodigy, or CompuServe make it even easier to access the power of the Internet. The Internet alone is almost reason enough to become computer literate, but there is still much more that computers can do. Knowing how to use a computer allows you to do a variety of things in several different ways. One of the most popular uses for computers today is playing video games. With a PC you can play card games, simulation games, sports games, strategy games, fighting games, and adventure games. 
Today's technology provides the ultimate experiences in color, graphics, sound, music, full motion video, animation, and 3D effects. Computers have also become increasingly useful in the music, film, and television industries. Computers can be used to compose music, create sound effects, create special effects, create 3D life-like animation, and add previously existing movie and TV footage into new programs, as seen in the movie Forrest Gump. All this and more can be done with computers. There is truly no time like the present to become computer literate. Computers will be doing even more things in the future and will become unavoidable. Purchasing and learning about a new PC now will help put PCs into the other two-thirds of the homes worldwide and make the transition into a computer age easier. Works Cited "Computer." Webster's New World Compact School and Office Dictionary. 1995. Elmer-Dewitt, P. "Mine, All Mine." Time Jun. 1995: 46-54. Shelly, G., T. Cashman, and K. Jordan. Netscape Navigator 2 Running Under Windows 3.1. Danvers: Boyd & Fraser Publishing Co., 1996. Shelly, G., T. Cashman, and M. Vermaat. Microsoft Office Introductory Concepts and Techniques. Danvers: Boyd & Fraser Publishing Co., 1995. Shelly, G., T. Cashman, G. Waggoner, and W. Waggoner. Using Computers: A Gateway to Information World Wide Web Edition. Danvers: Boyd & Fraser Publishing Co., 1996. f:\12000 essays\technology & computers (295)\Will Computers Control Humans In The Future .TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Will computers control humans in the future? People always tend to seek the easy way out, looking for something that will make their lives easier. Machines and tools have given us the ability to do more in less time, giving us, at the same time, more comfort. As technology advances, computers become faster and more powerful. These new machines enable us to do more in less time, making our lives easier. 
The increased use of computers in the future, however, might have negative results and a negative impact on our lives. In the novel Nine Tomorrows, Isaac Asimov often criticizes our reliance on computers by portraying a futuristic world where computers control humans. One of the images Asimov describes in the book is that humans might become too dependent on computers. In one of the stories, Profession, Asimov writes about people being educated by computer programs designed to educate a person effortlessly. According to Profession, people would no longer read books to learn and improve their knowledge. People would rely on the computers rather than "try to memorize enough to match someone else who knows" (Nine Tomorrows, Profession 55). People would not choose to study; they would only want to be educated by computer tapes. In the futuristic world that Asimov describes, putting in knowledge with a computer would take almost no time, compared to reading books and memorizing. Humans might begin to rely on computers and allow them to control their lives by letting computers educate them. Computers would teach humans whatever the computers dictated, leaving no room for choice or creativity. Computers would start to control humans' lives and make humans too dependent on them. Another point criticized by Asimov is the fact that people might take their knowledge for granted, allowing computers to take over and control their lives. In a story called The Feeling of Power, Asimov portrays how people started using computers to do even simple mathematical calculations. Over a long period of time, people became so reliant on computers that they forgot the simplest multiplication and division rules. If someone wanted to calculate an answer, they would simply use their pocket computer to do so (The Feeling of Power 77). People became too dependent on computers from the start, causing them to forget what they had learned in the past. 
People in the story The Feeling of Power took for granted what had been learned over centuries and chose computers because of their ability to do the work faster. The manual mathematics that people chose to forget in the story left computers to solve even simple mathematical problems for them, taking control of the humans by doing the work for them (The Feeling of Power 81-82). The reliance on computers went to such an extent that humans began to use computers in all fields of study and work, allowing computers to control their lives by taking over and doing everything for them. In another story in the book, Asimov also describes how computers would be able to predict the probability of future events. In the story All the Troubles of the World, one big computer predicted crime before it even happened, allowing the police to take the person who was going to commit the crime and release him or her after the danger had passed (All the Troubles of the World 144-145). This computer, called Multivac, controlled humans by telling the authorities who was going to commit a crime, causing someone to be imprisoned until the danger had passed. It was the computer that made the decision about a person's freedom or imprisonment, and that directed others to arrest a person it suspected of planning a crime, controlling his or her destiny. The decision to imprison someone for a crime the person did not commit was entirely in the hands of a computer, and the humans who believed everything the computer told them let it control them. Multivac could not only predict the future but could also answer many questions that would normally embarrass people if they had to ask someone else. Multivac could access its vast database of trillions of pieces of knowledge and find the best solution for one's problem (All the Troubles of the World 153). 
All the people believed that Multivac knew best and allowed a computer to control their lives by following the solutions Multivac had given them (All the Troubles of The World 153). Humans followed a computer's solution to problems they could not solve themselves, letting the machine take control of their lives rather than thinking for themselves. In Nine Tomorrows, Isaac Asimov repeatedly criticizes our reliance on computers. The author predicts that computers will play a growing role in the future as technology advances. Computers will become faster, and people will want to use them more to make their lives easier. Yet, as with any good side, there is a bad side. Asimov reflects in his writing that humans might depend on computers so much that they allow the machines to control their lives. 
f:\12000 essays\technology & computers (295)\william gibson and the internet.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
Introduction 
The words "Internet" and "world wide web" are becoming everyday terms; the net has exploded into the mass market of information and advertising. There are bad points about the "net" as well as good points, and this relatively new medium is growing at such a rate that the media have to take it seriously. This form of communication was once populated mainly by small communities, but now that it is getting much easier to access the web these groups are growing. The word Cyberpunk is nothing new in the world of the "net" or to science fiction readers, and it is this term which names most of the online communities. Within the Cyberpunk culture there are subcultures such as hackers, phreaks and ravers, all of which have a connection with new technologies. The term Cyberpunk originated in science fiction literature, where writers such as William Gibson tell stories of future worlds, cultures and the Internet. 
It is William Gibson and the cyberpunks who have carried out some of the most important mappings of our present moment and its future trends during the past decade. The present, in these mappings, is viewed from the perspective of a future that is visible from within the experiences and trends of the current moment; from this perspective, cyberpunk can be read as a sort of social theory. 
Chapter 1 Internet history 
The Internet is a network of computer networks, the most important of which was called ARPANET (Advanced Research Projects Agency NETwork), a wide-area experimental network connecting hosts and terminal servers together. Rules were set up to supervise the allocation of addresses and to create voluntary standards for the network. The ARPANET was built between October and December 1969 by a US company called Bolt, Beranek and Newman (BBN), which is still big in the Internet world. It had won a contract from the US Government's Department of Defense Advanced Research Projects Agency, or ARPA, to build a network that would survive a nuclear attack. Only four government mainframe computers were originally linked up, and ARPANET depended on the involvement of hundreds of US computer scientists. Because the ARPANET was a military project, it was managed in true military style: the project manager appointed by ARPA gave the orders and they were carried out. It was therefore easy to tell who "ran" the network. By 1972 it had grown to 37 mainframe computers. At the same time, the way in which the network was being used was changing. As well as using the system to exchange important, but boring, military information, ARPANET users started sending e-mail to each other by means of private mailboxes. By 1983 ARPANET had grown to such an extent that it was felt that the military research component should be moved to a separate network, called MILNET. 
In 1987 the system was opened up to any educational facility, academic researcher or international research organisation that wanted to use it. As local area networks became more pervasive, many hosts became gateways to local networks. A network layer allowing these networks to inter-operate was developed, called IP (Internet Protocol). Over time other groups created long-haul IP-based networks (NASA, NSF, individual states, and others). These nets, too, inter-operate because of IP. The collection of all of these inter-operating networks is the Internet. Until the early 1990s the Internet was only a complicated and uninteresting text-based medium, and most of the people using the net were computer programmers, students, hackers, societies, government officials and a few artists interested in digital media. Everything changed in 1993 when NCSA released "Mosaic", a text and graphics window (web browser) onto the net; this programme was simple to use. The basic structure was in simple page form: just click on a button, word or picture and you could cross half the world in seconds, and it was also simple to construct a page. Over the last couple of years, anyone with a computer and an Internet account has been able to create their own "Web page". The growth of the Internet, measured by machines connected to the NSFNET backbone, has been extraordinary. In 1989 the number of networks attached to the NSFNET/Internet increased from 346 to 997, and data traffic increased five-fold. The latest estimate is that 200,000 to 400,000 main computers are directly connected to NSFNET, with perhaps a total of eleven million individuals able to exchange information freely. The Internet is still growing, and companies are developing new tools and programmes to speed up communications so that immense amounts of data can be transferred in seconds. "The future of the 20th century, of the 21st century, will be the net. It's awesome. 
But on the net, you still have to have someone on the other side. The poor nerd who sits in front of the computer just talking to themselves - that's kind of sad. It's the contact that's important: interpersonal, interactive communication." [T. Leary (Observer 29/5/94) p16] 
Internet Cultures 
Over the years since the Internet first began, many clubs, organisations, cultures and societies have grown up and congregated on the net. This is probably because to many users it is a cheap (even free) form of worldwide communication, because the new technology links with their ideas, and because of the freedom of expression the Internet gives. No single government body or organisation owns the net and, because of its size, no one can fully govern or censor the Internet. So-called "hackers", also part of the "Cyberpunk" group, were one of the first groups of individuals known on the Internet; these were mostly male students studying computer science, trying to break into government computers or anywhere else they were not supposed to be. Most hackers live by this set of rules. First, access to computers should be unlimited and total: "Always yield to the Hands-On Imperative!" Second, all information should be free. Third, mistrust authority and promote decentralisation. Fourth, hackers should be judged by their prowess as hackers rather than by formal organisational or other irrelevant criteria. Fifth, one can create art and beauty on a computer. Finally, computers can change lives for the better. One group I came across in an article call themselves the "Extropians"; they want to be immortal and travel through space and time. They are also libertarians who want to privatise the oceans and air. One member, Jay Prime Positive, wants to upload his consciousness to a computer: "I'd probably want to spend most of my time in data space... I imagine having multiple bodies and multiple copies of myself. 
I have problems with gender identification, so I'd definitely have a female body in there somewhere". The group have many ideas about the future. You have perhaps never considered the idea of setting loose molecule-sized robots in your body to clean out your arteries (see nanotechnology), or a floating free state banged together out of old oil tankers (similar to the sprawl described in Gibson's "Mona Lisa Overdrive"), a place where freedom and unrestrained intellect could reign and you could finally get the government and the tax man off your back. The Extropians want to go beyond the limits of nature and biology and move on up to the stars; they believe that computers have kick-started human evolution. 
Chapter 2 Cyberspace 
The term "Cyberspace" was first coined by the sci-fi writer William Gibson in his 1984 novel "Neuromancer". Gibson first identified the emergence of Cyberspace as the most recent moment in the development of electromechanical communications, telematics and virtual reality. Cyberspace, as Gibson saw it, is the simultaneous experience of time, space, and the flow of multi-dimensional, pan-sensory data: all the data in the world stacked up like one big neon city, so you could cruise around and have a kind of grip on it, visually anyway, because if you didn't, it was too complicated, trying to find your way to the particular piece of data you needed. Cyberspace: "A consensual hallucination experienced daily by billions of legitimate operators, in every nation... A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding..." - William Gibson, Neuromancer. At the core of Cyberspace is the Internet. The psychologist/guru Timothy Leary, interviewed by David Gale in 1991, is very clear about Cyberspace: "What we're talking about is electronic real estate, a whole electronic reality. 
The problem we have is to organise the great continents of data that will soon become available. All the movies, all the TV, all the libraries, all recordable knowledge... These are the vast natural crude-oil reserves waiting to be tapped. In the 15th century we explored the planet; now we must prepare once more to chart, colonise and open up a whole new world of data. Software becomes the maps and guides into that terrain." The interesting thing about Cyberspace is the way it creates the idea of a community. Every subculture needs an image of an outsider's community to cling to, to run to. For the Cyberpunk, this community doesn't actually have a place; it can be accessed from anywhere by modem, yet it is the nearest thing the subculture has to a home on earth. Cyberpunk is the first subculture which doesn't have a particular place of congregation. There are now hundreds of bulletin boards around the world with a Cyberpunk style, where young cyberpunks discuss the latest hardware and software. Cyberspace is familiar to most people as the "place" in which a long-distance telephone conversation takes place. But it is also the treasure trove for all digital or electronically transferred information and, as such, it is the place for most of what is now commerce, industry, and human interaction. 
Cyberpunk History 
Cyberpunk literature, in general, deals with unimportant people in technologically-enhanced cultural "systems". In Cyberpunk stories' settings, there is usually a "system" which dominates the lives of most "ordinary" people, be it an oppressive government, a group of large corporations, or a fundamentalist religion. These systems are enhanced by certain technologies, particularly "information technology" (computers, the mass media), making the system better at keeping those within it inside it. Often this technological system extends into its human "components" as well, via brain implants, prosthetic limbs, cloned or genetically engineered organs, etc. 
Humans themselves become part of "the Machine". This is the "cyber" aspect of Cyberpunk. "Cyberpunk hit the front page of the New York Times when some young computer kids were arrested for cracking a government computer file. The Times called the kids 'cyberpunks'. From there, the performers involved in the high-tech-oriented radical art movement generally known as 'Industrial'..." [R.U. Sirius (Mondo 2000) 64] In the mid-'80s Cyberpunk emerged as a new way of doing science fiction in both literature and film. The first book: "Neuromancer"; the most important film: "Blade Runner". "What's most important to me is that Neuromancer is about the present. It's not really about an imagined future..." [William Gibson (MONDO 2000) 68] William Gibson is widely considered to be the father of "Cyberpunk": dark novels about hi-tech computer bohemians and underground renegades. His first novel, "Neuromancer", bears the distinction of winning the Hugo, Nebula, and Philip K. Dick awards, the first to win all three. Gibson parlayed the success of his first SF 'Cyberpunk' blockbuster Neuromancer into a more complex, engaging novel in which these two worlds are rapidly colliding. In his novel Count Zero, we encounter teenage hacker Bobby Newmark, who goes by the handle "Count Zero". On one of his treks into Cyberspace, Bobby runs into something unlike any other AI (artificial intelligence) he's ever encountered: a strange woman, surrounded by wind and stars, who saves him from 'flatlining'. He does not know what it was he encountered on the net, or why it saved him from certain death. Later we meet Angie Mitchell, the mysterious girl whose head has been 'rewired' with a neural network which enables her to 'channel' entities from Cyberspace without a 'deck' - in essence, to be 'possessed'. 
Bobby eventually meets Beauvoir, a member of a Voudoun/cyber sect, who tells him that the entity he met in Cyberspace was Erzulie, and that he is now a favourite of Legba, the lord of communication. Beauvoir explains that Voudoun is the perfect religion for this era because it is pragmatic: "It isn't about salvation or transcendence. What it's about is getting things done." Eventually, we come to realise that after the fracturing of the AI Wintermute, who tried to unite the Matrix, the unified being split into several entities which took on the character of the various Haitian loa, for reasons that are never made clear. Now other writers like Bruce Sterling and Pat Cadigan have emerged. There is even an 'overground' Cyberpunk magazine called Mondo 2000, as well as a host of tiny desktop-published fanzines. A fundamental theme running through most Cyberpunk literature is that, in the near-future Earth, commodities are unimportant. Since anything can be manufactured very cheaply, manufactured goods (and the commodities needed to create them) are no longer central to economic life. The only real commodity is information. Cyberpunk took the bleak, 'no future' landscape of punk rock and post-apocalyptic movies like Blade Runner and Mad Max and imagined a way to escape from the street-level violence these films referred to. Together, Neuromancer and Blade Runner set the boundary conditions for emerging Cyberpunk: a hard-boiled combination of high tech and low life. As the William Gibson phrase puts it, "The street has its own uses for technology." So compelling were these two narratives that many people, then and now, refuse to regard as Cyberpunk anything stylistically and thematically different from them. Yet literary Cyberpunk had become more than Gibson, and Cyberpunk itself had become more than literature and film. In fact, the label has been applied variously, promiscuously, often cheaply or stupidly. 
Kids with modems and the urge to commit computer crime became known as "cyberpunks" or "hackers"; however, so did urban hipsters who wore black, read Mondo 2000, listened to "industrial" pop, and generally subscribed to techno-fetishism. Gibson had become more than just another SF writer; he was a cultural icon of sorts. Gareth Branwyn posted the following description of the Cyberpunk world view to the MONDO 2000 conference of the WELL (see glossary): 
A) The future has imploded onto the present. There was no nuclear Armageddon. There's too much real estate to lose. The new battlefield is people's minds. 
B) The megacorps are the new governments. 
C) The US is a big bully with lackluster economic power. 
D) The world is splintering into a trillion subcultures and designer cults with their own languages, codes, and lifestyles. 
E) Computer-generated info-domains are the next frontiers. 
F) There is better living through chemistry. 
G) Small groups or individual "console cowboys" can wield tremendous power over governments, corporations, etc. 
H) The coalescence of a computer "culture" is expressed in self-aware computer music, art, virtual communities, and a hacker/street-tech subculture. The computer-nerd image is passé, and people are no longer ashamed of the role the computer has in this subculture. The computer is a cool tool, a friend, important human augmentation. 
I) We're becoming cyborgs. Our tech is getting smaller and closer to us, and it will soon merge with us. 
J) [Some attitudes that seem to be related] *Information wants to be free. *Access to computers and anything which may teach you something about how the world works should be unlimited and total. *Always yield to the hands-on imperative. *Mistrust authority. *Promote decentralisation. *Do it yourself. *Fight the power. *Feed the noise back into the system. *Surf the edges. 
[(MONDO 2000) 65-66] 
Cyberpunk Culture 
Science fiction deals with issues as diverse as the clash between religious fundamentalism and the consumer society, abortion and the church, life support for the terminally ill, or the freedom of the individual in the age of on-line databases. William Gibson's brave new world is seen as in a state of impermanent decay compared to "Cyberspace", the "virtual world" already in embryonic existence in the Internet global computer network. In Gibson's latest novel, Virtual Light, a pair of designer sunglasses holds all the data on plans for a property scam involving the rebuilding of post-quake San Francisco. Gibson's "heroes" are a handful of neo-punks and derelicts. His future world is a grim approximation of today's social and technological trends, a graphic debunking of the progress principle. In the 20th century, the Net is only accessible via a computer terminal, using a device called a modem to send and receive information. But in 2013, the Net can be entered directly, using your own brain, neural plugs and complex interface programs that turn computer data into perceptual events. In several places, reference is made to the military origin of the Cyberspace interfaces: "You're a console cowboy. The prototypes of the programs you use to crack industrial banks were developed for [a military operation]. For the assault on the Kirensk computer nexus. Basic module was a Nightwing microlight, a pilot, a matrix deck, a jockey. We were running a virus called Mole. The Mole series was the first generation of real intrusion programs." [Neuromancer] "The matrix has its roots in primitive arcade games... early graphics programs and military experimentation with cranial jacks." [Neuromancer] Gibson also assumes that, in addition to being able to "jack in" to the matrix, you can go through the matrix to jack in to another person using a "simstim" deck. 
Using the simstim deck, you experience everything that the person you are connected to experiences: "Case hit the simstim switch. And flipped into the agony of a broken bone. Molly was braced against the blank grey wall of a long corridor, her breath coming ragged and uneven. Case was back in the matrix instantly, a white-hot line of pain fading in his left thigh." [Neuromancer] The matrix can be a very dangerous place. As your brain is connected in, should your interface program be altered, you will suffer; if your program is deleted, you would die. One of the characters in Neuromancer is called the Dixie Flatline, so named because he has survived deletion in the matrix. He is revered as a hero of the cyber jockeys: "'Well, if we can get the Flatline, we're home free. He was the best. You know he died brain death three times.' She nodded. 'Flatlined on his EEG. Showed me the tapes.'" [Neuromancer] Incidentally, the Flatline doesn't exist as a person any more: his mind has been stored in a RAM chip which can be connected to the matrix. Cyberpunk is fascinated by the media technologies which were hitting the mass market in the 80s. Desktop publishing, computer music and now desktop video are technologies taken up with enthusiasm by Cyberpunks. The rapid evolution from video games to virtual reality has been helped along by the hard core of enthusiasts eager to try out each generation of simulated experience. The multimedia convergence of the publishing, computer, broadcasting and recording industries has a spot right at its centre called Cyberpunk, where these new product experiments find a critical but playful market. Cyberpunk is a product of the huge batch of technical and scientific universities created in the US to service the military-industrial complex. Your typical Cyberpunk is white, middle class, and technically skilled. 
They are a new generation of white-collar workers, resisting the yoke of work and suburban life for a while. They don't drop out; they jack in. They are an example of how each generation, growing up with a given level of media technology, has to discover the limits and potentials of that technology by experimenting with everyday life itself. In the case of Cyberpunk, the networked world of Cyberspace, the interactive world of multimedia and the new sensoria of virtual reality will all owe a little to their willingness to be the guinea pigs for these emergent technologies. There is also a tension in Cyberpunk between the military that produces the technology and the sensibility of the technically skilled individual trained for the high-tech machine. Like all subcultures, Cyberpunk expresses a conflict. On one side is the libertarian idea that technology can be a way of wresting a little domain of freedom for people from the necessity to work and live under the constraints of today. On the other is the fact that the technologies of virtual reality, multimedia and Cyberspace would never have existed in the first place had the Pentagon not funded them as tools of war. On the one hand it is a drop-out culture dedicated to pursuing the dream of freedom through appropriate technology. On the other it is a ready market for new gadgets and a training ground for hip new entrepreneurs with hi-tech toys to market. Cyberpunk's fast crawl to the surface has included not only pop music (industrial, post-industrial, techno pop, etc.), but also television (MTV, Saturday morning cartoons, the late "Max Headroom" series, etc.) and movies ("Total Recall," "Lawnmower Man," the Japanese "Tetsuo" series, etc.). There is a bi-monthly magazine called Wired, aimed in part at the Cyberpunk set and financed in part by MIT Media Lab director Nicholas Negroponte, and there are the principals of Mondo 2000. 
"The micro technology that, in Cyberpunk, connects the streets to the multinational structures of information in Cyberspace also connects the middle-class country to the middle-class city." [S.R. Delany (Flame Wars) 198] Cyberpunk tends to fill some of us with uneasiness and even fear. The X Generation is made up of Slackers and Hackers (a.k.a. Phreakers, Cyberpunks, and Neuronauts). They are Ravers and techno-heads. According to most demographers, we are more street-smart and pop-culture literate, and less versed in the classics, ethics, and formal education (especially in areas like geography, civics, and history: areas where we appear to be, in short, an academic disgrace). We are said to have less ambition, less idealism, fewer morals, smaller attention spans, and less discipline than any previous generation of this century. We are the most aborted, most incarcerated, most suicidal, and most uncontrollable, unwanted, and unpredictable generation in history. (Or so claim the authors of 13th Generation.) "The work of cyberpunks is paralleled throughout eighties pop culture: in rock video; in the hacker underground; in the jarring street tech of hip-hop and scratch music..." [Bruce Sterling (MONDO 2000) 68] 
Cyberpunk and Technology 
In Gibson's world, Cyberspace is a consensual hallucination created within the dense matrix of computer networks. Gibson imagines a world where people can directly jack their nervous systems into the net, vastly increasing the intimacy of the connection between mind and matrix. Cyberspace is the world created by the intersection of every jacked-in consciousness, every database and installation, every form of interconnected information circuit; in short, human or in-human. Cyberspace is no longer merely an interesting item in an inventory of ideas in Gibson's fiction. 
In Cyberspace: First Steps, a collection of papers from The First Conference on Cyberspace, held at the University of Texas, Austin, in May 1990, Michael Benedikt defines Cyberspace as "a globally networked, computer-sustained, computer-accessed, and computer-generated, multidimensional, artificial, or 'virtual' reality." He admits "this fully developed kind of Cyberspace does not exist outside of science fiction and the imagination of a few thousand people"; however, he points out that "with the multiple efforts the computer industry is making toward developing and accessing three-dimensionalized data, effecting real-time animation, implementing ISDN and enhancing other electronic information networks, providing scientific visualisations of dynamic systems, developing multimedia software, devising virtual reality interface systems, and linking to digital interactive television . . . from all of these efforts one might cogently argue that Cyberspace is 'now under construction.'" 
Cyberpunk in TV and Cinema 
One film, "War Games", was based on a college student who hacked into the US defence computer and started a simulation program of a nuclear attack on Russia, which looked like the real thing to the Russians. In the near future a British film called "Hackers" is to be released, directed by Iain Softley (BackBeat). Also soon to be released are "The Net", starring Sandra Bullock (Speed), and a Gibson Cyberpunk thriller called "Johnny Mnemonic", a $26 million science fiction movie based on his short story, starring Keanu Reeves as the main character and directed by Robert Longo. The film also stars Ice-T, Dolph Lundgren, Takeshi Kitano (of the cult "Sonatine"), Udo Kier, Henry Rollins and Dina Meyer. William Gibson also wrote the screenplay, adapted from his original story, which was published in the anthology "Burning Chrome". "Johnny Mnemonic" goes into wide release in Dec 1995. 
The film Blade Runner, loosely based on Dick's novel Do Androids Dream of Electric Sheep?, is set in early 21st-century Los Angeles. Amid the enormous human cultural diversity evident, five synthetically designed organic robots - replicants - have escaped their slave status on an off-world colony. These replicants are the property of the Tyrell Corporation and have extremely high levels of physical and mental development. The Tyrell Corporation, to ensure that the replicants do not develop the emotional capacity of their human masters, genetically engineers a four-year life span. On the basis of this slavery, the Tyrell Corporation uses the marketing slogan 'More Human Than Human', and like those who settled earth's New World in the seventeenth century, they expect slave labour. Whilst this commentary is certainly true, a further elaboration can be made on the technological nature of the replicants; they were, for all intents and purposes, a new life-form. "Max Headroom was the most amazingly Cyberpunk thing that's ever been on network TV. Max started out as an animated VJ for a British music-video channel. In order to introduce him, a short film was made... Entertainment with all the corners filled in. I think that's what a lot of Cyberpunk writing is... Television is the greatest Cyberpunk invention of all time." [Steve Roberts (MONDO 2000) 76] 
Theories 
One man who has his own theory about the net is Kevin Kelly (executive editor of Wired); he combines ideas from chaos theory, cybernetics, current thinking on evolution and research into computerised artificial life with his own experience of on-line culture. His main argument is that we're entering 'the Neo-Biological Era'. The line between the made and the born is being blurred; machines are becoming biological and the biological is being engineered. The reason is that we have reached the limits of industrial thinking. 
Linear cause-and-effect logic is no good for figuring out the hugely complex systems (phone networks, global economies, the Internet) that we have created, so we've begun to look instead at natural systems. After years of tapping mother nature for food and raw materials, we're now mining her for ideas. One scenario of the Internet he is playing with is that the net might die. "You can imagine a situation in which there's 200 million people on the Internet trying to send e-mail messages and the whole thing just grinds to a halt. Its own success just kills it. In the meantime, a telephone company steps in and offers e-mail for $5 a month, with no traffic jams, and it's reliable. I hope it doesn't happen but it's a scenario one has to consider." George Gilder of the Hudson Institute stated that "there is about to be a revolution, born of nothing less than sand, glass and air", and yet it is one which will have an incalculable effect upon us all. From sand will come microchips offering supercomputing power on slices of silicon smaller than a thumbnail and cheaper than a book. From glass will be fashioned fibre-optic cables that will flash information of any size at lightning speed. In the air, frequency bandwidths of practically limitless size, available at virtually no cost, will permit the wireless transmission of any kind of digital data from anywhere to anywhere, instantly. Timothy Leary, the man who coined the phrase "turn on, tune in and drop out" in the 60s, thinks that the future of the 20th and 21st centuries will be the net. "It's awesome. But on the net, you still have someone on the other side. The poor nerd who sits in front of the computer just talking to themselves - that's kind of sad. It's the contact that's important, interpersonal, interactive communication. We're hard-wiring global consciousness; we're moving towards a global mind, a global village. Soon we'll develop a global language. People will communicate with pictures, not words". 
Jean Baudrillard described the emergence of a new postmodern society organised around simulation, in which models, codes, communication, information, and the media were the demiurges of a radical break with modern societies. Baudrillard's postmodern universe was also one of hyperreality, in which models and codes determined thought and behaviour, and in which media of entertainment, information, and communication provided experience more intense and involving than the scenes of banal everyday life. In this postmodern world, individuals abandoned the 'desert of the real' for the ecstasies of hyperreality and a new realm of computer, media, and technological experience. 
Visions of the Future 
Gibson's vision is of a multi-dimensional space inhabited by vast "data structures", where glowing and pulsing representations of data flow within the ubiquitous computer/telecommunications networks of military and corporate memory banks (see Johnny Mnemonic). During the 80s, the Cyberspace vision was being fleshed out in the workshops and laboratories of silicon space, where ways of seeing it, being in it, touching and feeling it, flying through it and hearing it were being developed. Practical, working "virtual reality" machines related to that vision (such as W Industries' Virtuality and VPL's Reality Built for Two) were on sale in both the US and Britain by 1990. By 1994 cheap headsets and programmes were available to almost anyone. The Cyberpunk future includes the likes of a computer-generated artificial environment known as virtual reality. (Not so futuristic, perhaps: VR arcade games are already here.) It includes dreams of virtual sex. (Not so futuristic, either: text-based "sex" already exists on computer networks. Call it Phone Sex: The Next Generation.) It includes further developments in robotics, artificial intelligence, even artificial life. 
More to the point of punk, it includes "smart drugs," legal substances that allegedly increase mental capacity. Hans Moravec has suggested it may "someday be possible for mental functions to be surgically extracted from the human brain and transferred to computer software in a process he calls 'transmigration'. The useless body with its brain tissue would then be discarded, while consciousness would remain stored in computer terminals, or for the occasional outing, in mobile robots." [Hans Moravec, Mind Children: The Future of Robot and Human Intelligence (Cambridge, MA, 1988), 108] Cyberpunk fiction characters are hard-wired (see Johnny Mnemonic), jack into Cyberspace, plug f:\12000 essays\technology & computers (295)\William Henry Gates III.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ William Henry Gates III Chairman and Chief Executive Officer Microsoft Corporation William (Bill) H. Gates is chairman and chief executive officer of Microsoft Corporation, the leading provider, worldwide, of software for the personal computer. Microsoft had revenues of $8.6 billion for the fiscal year ending June 1996, and employs more than 20,000 people in 48 countries. Background on Bill Born on October 28, 1955, Gates grew up in Seattle with his two sisters. Their father, William H. Gates II, is a Seattle attorney. Their late mother, Mary Gates, was a schoolteacher, University of Washington regent and chairwoman of United Way International. Gates attended public elementary school and the private Lakeside School. There, he began his career in personal computer software, programming computers at age 13. In 1973, Gates entered Harvard University as a freshman, where he lived down the hall from Steve Ballmer, now Microsoft's executive vice president for sales and support. While at Harvard, Gates developed the programming language BASIC for the first microcomputer -- the MITS Altair. 
In his junior year, Gates dropped out of Harvard to devote his energies to Microsoft, a company he had begun in 1975 with Paul Allen. Guided by a belief that the personal computer would be a valuable tool on every office desktop and in every home, they began developing software for personal computers. Gates' foresight and vision regarding personal computing have been central to the success of Microsoft and the software industry. Gates is actively involved in key management and strategic decisions at Microsoft, and plays an important role in the technical development of new products. A significant portion of his time is devoted to meeting with customers and staying in contact with Microsoft employees around the world through e-mail. Under Gates' leadership, Microsoft's mission is to continually advance and improve software technology and to make it easier, more cost-effective and more enjoyable for people to use computers. The company is committed to a long-term view, reflected in its investment of more than $2 billion in research and development in the current fiscal year. As of December 12, 1996, Gates' Microsoft stock holdings totaled 282,217,980 shares, selling at $95.25 as of Feb. 20, 1997 -- giving a rough estimate of his total worth: $26,881,262,595. In 1995, Gates wrote The Road Ahead, his vision of where information technology will take society. Co-authored by Nathan Myhrvold, Microsoft's chief technology officer, and Peter Rinearson, The Road Ahead held the No. 1 spot on the New York Times' bestseller list for seven weeks. Published in the U.S. by Viking, the book was on the NYT list for a total of 18 weeks. Published in more than 20 countries, the book sold more than 400,000 copies in China alone. In 1996, while redeploying Microsoft around the Internet, Gates thoroughly revised The Road Ahead to reflect his view that interactive networks are a major milestone in human history. The paperback second edition has also become a bestseller. 
Gates is donating his proceeds from the book to a non-profit fund that supports teachers worldwide who are incorporating computers into their classrooms. In addition to his passion for computers, Gates is interested in biotechnology. He sits on the board of the Icos Corporation and is a shareholder in Darwin Molecular, a subsidiary of British-based Chiroscience. He also founded Corbis Corporation, which is developing one of the largest resources of visual information in the world -- a comprehensive digital archive of art and photography from public and private collections around the globe. Gates also has invested with cellular telephone pioneer Craig McCaw in Teledesic, a company that is working on an ambitious plan to launch hundreds of low-orbit satellites around the globe to provide worldwide two-way broadband telecommunications service. In the decade since Microsoft went public, Gates has donated more than $270 million to charities, including $200 million to the William H. Gates Foundation. The focus of Gates' giving is in three areas: education, population issues and access to technology. Gates was married on Jan. 1, 1994 to Melinda French Gates. They have one child, Jennifer Katharine Gates, born in 1996. Times are changing fast. Three years ago, while President Bush's camp was mounting a direct-mail campaign unchanged from that of Reagan before him, the Clinton camp, host to a horde of so-called "computer whiz kids," all in their twenties, was developing a completely new set of election tactics, using personal computer networks and electronic mail, or "e-mail". Many of these twenty-some-odd-year-old mini-Clintons, who now occupy the White House, show up for work in sneakers, T-shirts, and jeans, and spend each day, from morn till night, tapping away at personal-computer keyboards. 
As I myself have often experienced of late, when you exchange business cards with an American you nearly always see, imprinted on the card along with the phone and fax numbers, an e-mail address as well. When the person inquires, "What is your e-mail address?" and you reply, "I don't have one yet," you can catch the briefest glimmer in his eye, which seems to say, "A bit behind the times, aren't we?" The darling of this multimedia age is a man named Bill Gates. Won over by then Vice-Presidential candidate Gore's promise to vigorously promote the "information superhighway," Gates, declaring himself a representative of Silicon Valley, donated a large amount of money to the Clinton campaign. The support of Bill Gates boosted the popularity of the Democratic Party. This year, Forbes magazine's traditional annual list ranked this same Bill Gates, head of Microsoft Corp., as the world's richest human being. Myths and legends about this youthful success story abound; he has already published an autobiography which, along with a critical biography of Gates, is being read by people all over the world. He is, in short, a super-famous man. Gates' rear-echelon e-mail activities have been reprinted not only in America and Europe, but even, in translation, in Japanese newspapers. Gates has been known for some time as a political liberal and a strong supporter of the Democratic Party; lately, however, the word about town is that Gates and the Democratic Party have had a falling-out. The U.S. Department of Justice under the Clinton administration, citing doubts about the legality under U.S. antitrust laws of attempted buy-outs of other companies by Microsoft, has put such purchases on hold, causing them to fall through and, it is said, greatly angering Bill Gates. 
Gates: "modern-day Rockefeller" Gates, an object of admiration for most Americans as a "modern-day Rockefeller," is also, it seems, an object of envy who arouses fierce jealousy: charges are currently being brought against him for violation of antitrust laws. Simply put, the Justice Department, under the traditional notion that allowing software makers to merge with the company which makes their computer operating systems to form a single giant company is less desirable than keeping them separate, is moving to block Gates' path. Some 80% of the personal computers in the world today use the MS-DOS or Windows operating systems -- both Microsoft products. If you purchase a piece of software, such as a word processor, and try to run it on your personal computer, you will be unable to run the program unless it is first able to connect with an operating system. Because of this judgment that it is best to keep separate that which ought to be consolidated, it is difficult to see how the Internet, or any other information network, can in future be integrated into a single, unified whole. The specter of an antitrust law born in the age of Standard Oil has risen once again to haunt us. As a rule, disputes such as this are amicably settled by lobbyists. Astoundingly, however, Bill Gates had not a single lobbyist in Washington. Absorbed in his work, it seems, he had neglected to devote any attention to lobbying activities. Then, too, his is such a new industry that it simply hadn't had time to hire lobbyists and launch a carefully planned program of lobbying activities. Thus it appears that Gates' split with the Democratic Party is a fait accompli. "The Road Ahead" In "The Road Ahead," a book-and-CD-ROM package, Gates "predicts the future for you" (as Newsweek's cover put it). And, surprise!, things look bright indeed to America's richest guy. 
The "information highway" -- Gates generally clips it to a plain "the highway" -- isn't here yet; the Internet is only a genetic precursor, according to Gates. But when "the highway" itself arrives at our doors, with its ubiquitous high-bandwidth digital video feeds, our lives will undergo a seismic change for the better. This "World of Tomorrow" prognostication game is old-hat enough that even Gates admits many of his predictions will soon look comical. The CD-ROM's video portrait of "the highway" circa 2004 -- a world of heavy makeup, bad Muzak and super-efficient cappuccino bars -- will make for good party entertainment a decade hence. So will its wide-eyed virtual-reality walk-through of the still-unfinished Gates mansion, the Hearst Castle of the '90s. "The Road Ahead," like an AT&T ad, is built around a ritual repetition of the word "will." I used the CD-ROM's "full text search" function and, though it wouldn't tell me how many times "will" appears, it reported that the word turns up on just about every page. You will use "the highway" to "shop, order food, contact fellow hobbyists, or publish information for others to use." You will select how, when and where you wish to receive your news and entertainment. You will benefit from lower prices and the elimination of middlemen that the network's "friction-free" marketplace allows. Your wallet PC will identify you at airport gates and highway tollbooths. Your children will tap a torrent of homework helpers. As the CD-ROM narrator breathlessly puts it, "The information flow into your home will be incredible!" ("Get the mop, Martha!") At some point, all these "wills" change in character from predictive to prescriptive, and Gates' friendly if cool tone acquires an undercurrent of coercion. The promise of "the highway," according to Gates, is that it will allow us all to control our destinies more fully. 
The not-so-well-buried subtext of "The Road Ahead," though, tells a different story -- of Gates' and Microsoft's desperate struggle to maintain control of the high-tech marketplace. "The Road Ahead" won't satisfy readers curious for insights into Chairman Bill's psyche; it mostly has the bland, confident air of an annual report. But in its very first chapter -- next to a cute high-school picture of Gates and Paul Allen scrunched over an old teletype terminal -- Gates does give one clue to his mindset. He was attracted to computers as a kid, he explains, because "we could give this big machine orders and it would always obey." It's easy to jump on a line like that and make Gates out as some kind of silicon-chip Nazi. But of course he's only being honest about the attraction computer science has always held for engineers, enthusiasts and precocious children: the appeal of instantly responsive, utterly submissive systems that can be gradually massaged toward perfection. Though digital technology invites its creators into a world of absolute control, the computer market remains a place of frustrating chaos. Gates long ago adopted the strategy that made Microsoft's fortune: ship early with imperfect products, seize market share and then upgrade toward an acceptable level of performance. This drives engineers nuts, but it's sharp business, and it has kept the company on top of the software industry -- until now. Conclusion and personal ideas: William Henry Gates III, as you have read, is quite an incredible man. His intelligence and insight into the future, shows how "ahead of his time", he is. In almost all of our daily lives, (whether you know it or not), Gates has done something, influenced someone, invented some new software, that is relevant to what you do. 
Whether you are a news reporter or a bagger at a grocery store, a high-tech attorney or a low-tech gardener, it seems that not a day goes by without some mention of technology, computers, or what's in store for us. He is quite a pioneer in his field, and has brought a new realization to many regarding the future. In fact, his best-selling 1995 book is titled "The Road Ahead". This man has such power over our society and our country that his ideas are often met with resistance. Many people believe that it is terrible that someone with ideas and goals like his should have so much power and say in our everyday life. It is obvious to many that he tells the truth when he talks about the future and how he thinks it will be, because with his economic stature and powerful ideas, he will be able to change the world. I believe he is one of the most influential men in our recent history, to be compared with Hitler, Rockefeller, Martin Luther King, and many other such figures. He has influenced me personally, just with the use of computers in our everyday lives (more in mine than in others'), and the majority of our U.S. population. His presence in our economy, society, and life cannot be ignored, and I believe this will become even more evident as we move into the 21st century. f:\12000 essays\technology & computers (295)\Windows 95 Beats Mac.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Over the years, there has been much argument over which computer platform to buy. The two contenders in this competition have been the PC, with its Windows environment, and the Macintosh. Now, with the successful release of Windows 95 for the PC, the major arguments for each side fall into three areas: hardware configuration, networking capabilities, and operating system. The first arguments to look at between the PC and Mac platforms have to do with hardware configuration. Before Windows 95, installing and configuring hardware was like pulling teeth. 
The instructions given to help install hardware were too complicated for the average user. There was also the issue of compatibility among the large number of different hardware setups available in the PC world: is a particular board going to work with my PC? With Windows 95, these problems were alleviated with Plug and Play technology. With Plug and Play compatible boards, the computer detects and configures the new board automatically. The operating system may even recognize some hardware components on older PCs. Mac users will claim that they always had the convenience of a plug-and-play system, but the difference shows in the flexibility of the two systems. Another set of arguments Mac users use in favor of their systems over PCs concerns multimedia and networking capabilities. Mac users gloat that the Mac has networking technology built into the system; even if a user does not use it, the network support is included. PC users, they point out, hate the fact that they need to stick a card in their computers to communicate with any other computer. With Windows 95, the Mac network gloaters are silenced. Windows 95 includes built-in network support, and most networks will work properly with it. The Mac users also claim their systems have speech, telephony, and voice recognition features that the PC user does not have. In truth, the promised building blocks for telephony control do not yet exist, and I think speech is not a strong point for the Mac. In the world of computers, people cannot stand still for too long without getting passed by. Windows 95 now threatens the only assets the Mac had in capturing the interest of consumers: hardware configuration, communication between computers, and the differences between the operating systems of the two platforms. Almost any argument one could give in defense of the Mac does not carry nearly as much bite as it did before Windows 95 arrived. PC users have something to be proud of. 
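The "built-in networking" both camps boast about ultimately comes down to the operating system exposing sockets that any two programs can talk over. A minimal modern Python sketch (purely illustrative -- nothing resembling period Mac or Windows 95 code) of two programs exchanging a message over the loopback interface:

```python
import socket
import threading

# A tiny loopback "network": one thread listens like a server,
# the main thread connects like a client. The OS supplies everything.

def serve(listener: socket.socket) -> None:
    conn, _addr = listener.accept()
    with conn:
        data = conn.recv(1024)        # receive the client's message
        conn.sendall(data.upper())    # echo it back, uppercased

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=serve, args=(listener,))
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello mac and pc")
    reply = client.recv(1024)

t.join()
listener.close()
print(reply.decode())                 # HELLO MAC AND PC
```

The point is simply that networking is an operating-system service rather than an add-in's private feature -- which is why its arrival in Windows 95 erased one of the Mac's talking points.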
f:\12000 essays\technology & computers (295)\Windows 95 or NT.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ When one asks oneself or another which operating system will better fill one's needs -- Windows 95 or Windows NT version 3.51 -- the answer is not obvious. I will look at both operating systems and compare the qualities of each one in price, performance, stability and ease of use. The final results will give one a clear view of the superior operating system for years to come. As anyone who keeps up with the computer industry knows, Microsoft Windows has been around for a long time. The majority of all PC users use some type of Windows for their working environment. Microsoft has spent a great deal of time trying to make the supreme operating system. In doing so they have created two of the most debated systems available to the general public in this day and age. However, each one of these operating systems has its good side and its bad side. Windows NT 3.51 was originally created for business use, but has ended up being more widely available for the average PC user in the home. Windows 95 was developed solely as an alternative to Windows NT, but has ended up in the workplace more than the home. Windows 95 carries an average price of ninety-five dollars in stores, which makes it an expensive system, but worth the money. On the other hand, Windows NT 3.51 carries a price tag of three hundred forty-nine dollars, making this software very expensive but also worth every penny. Windows 95 is much easier to use than Windows NT. It was designed to give the PC user an easier time navigating through its complex tasks. This is one of the main reasons why people would rather buy the less expensive operating system than the more expensive Windows NT. Another one of the reasons that Windows 95 is more popular is its simple graphical user interface, otherwise known as the GUI. 
Windows 95 also carries an option that Windows NT does not: PnP, or Plug and Play. This is where the operating system will install new hardware, including hardware added at a later date; Windows NT does not carry this very useful feature. If one has ever tried to install a new peripheral, it can be a headache alone trying to decipher the instruction manual that comes along with the device. Windows 95 will do this on its own; one of the downfalls is that the device must be less than six to eight months old and carry the PnP logo. Windows NT 3.51 was developed more for business applications (e.g., databases, spreadsheets, word processing and programming). In the long run, though, Windows NT is less susceptible to system crashes. Windows NT does not carry the same graphical user interface as Windows 95; it looks more like Windows 3.x, so it is a little more difficult to navigate around. Windows NT does, however, have the ability to multitask (meaning to have more than one application open at a time). Windows 95 also carries the ability to multitask, but loses a great deal of system performance in doing so. Windows NT comes pre-loaded with C2-level government security standards. Windows NT was also designed to be used in a network; Windows 95 can be used on a network as well, but Windows NT does a much better job of handling a network environment. Conclusion In the race for the best operating system, Microsoft is certainly the leader in the personal computer industry. Microsoft has proven that it can meet anyone's needs by releasing two different operating systems, each having its own benefits. 
One has to decide what kind of computing one will be doing, how much money one wants to invest in the software, and whether one is ready to take the leap into the future and upgrade f:\12000 essays\technology & computers (295)\Windows 95 Skills Checklist.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Windows 95 Skills Check Please do each of the following tasks. If you have a question on how to do any of them, do not hesitate to ask your instructor for assistance. This skills check is provided for you to measure how well you can perform the tasks that were covered in class thus far. 1. Start the computer, go into "Safe Mode" in Windows 95. 2. Shut down the computer in the correct manner. 3. Restart your computer and go into normal mode in Windows 95. 4. Open up the Taskbar Properties Sheet and check "Auto Hide." 5. Change the date and the time on your computer to read Jan 1, 2000 and 12:00 P.M. 6. Reset the date and time to the correct settings. 7. Using the Find feature in the Start Button, find the files Command.dos and Config.dos. What directories are they located in? 8. Start any five programs (sol.exe, cal.exe, clock.exe, quicken, etc.). 9. Maximize each of the started programs. 10. Minimize each of the started programs. 11. Restore each program, one at a time, using the Taskbar. 12. Close each program, one at a time, using the Taskbar. 13. Open up the Explorer. 14. Make sure the Toolbar is activated. 15. Change the View: use all options -- large icon, small icon, detail, list, etc. 16. While in the Detail View mode, order files from any directory from top to bottom first by name, then by date, then by size. 17. Copy any file and place it on the desktop. 18. Make a shortcut of any game or accessory file and place it on the desktop. 19. Make a shortcut of drive A: and place it on the desktop. 20. Move the Taskbar to the right side, the top, and then return it to its original position. 21. Open Microsoft Word 22. 
Minimize, Maximize and Restore Microsoft Word. 23. Close Microsoft Word. 24. Open Explorer. 25. Open up any folder that has more than ten files. 26. Using the mouse, practice selecting one file, random files, and sequential files (Control and Shift buttons in conjunction with the mouse). 27. Close Explorer. f:\12000 essays\technology & computers (295)\Windows 95 the O[S of the future.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Windows 95 the O/S of the future The computing world is changing at a breakneck pace. People are looking for computers to be easy to use and to make life easier for them. Computer manufacturers and software developers have started to tailor computers and programs to fit the needs of the new "computer age". Graphical user interface (GUI) software began to make computing easier, and people who never dreamed of owning computers began to buy them. The Macintosh was one of the first GUI computers to hit the market, but it was not IBM compatible, so it did not take over the mainstream of the computer industry. Since most computers were being made to fit IBM-compatible standards, Microsoft saw the need to replace DOS (Disk Operating System) with something easier to use. That is when they developed Windows, which covered the difficult-to-use DOS with a new face that made computing easier. The first Windows was a start in the right direction. In an effort to make computing meet the needs of the public, Microsoft developed Windows 95. Windows 95 has the appearance of being a completely user-friendly operating system, and it pretty much is as far as the average user is concerned. Its compatibility with most hardware makes it easy for someone to upgrade their computer. The desktop is designed so the user has point-and-click access to all their open and closed programs. 
Utilizing the 32-bit programming it was written with, users are able to work with more than one program at a time and move information between programs. This gives users the freedom they need to begin to explore the world of computing without having to learn all the "computer stuff". Today everyone wants the fastest computer with the best monitor and fastest modem; this was an interrupt-address nightmare until Windows 95 was developed. People didn't know what jumpers needed to go where to make their hardware work, or why their CD-ROM wouldn't work since they changed their sound board. Most hardware peripherals now have all their configurations built into a chip that communicates with Windows 95 to find out where it needs to put itself in the address map. This allows users to have fancy big-screen monitors and connect to the Internet with high-speed modems. They can also put in faster video cards that use all the nice Windows 95 features, thus making their computing less complicated. Windows 95 is set up with novice users in mind. As with Windows 3.x, it has boxes that open up with the program inside, called windows. These windows are used to make computing more exciting for the user. No one wants to look at a screen with just plain text anymore. Before a window is opened, it is represented by an icon. Double-clicking this icon with the mouse pointer will open the application window for the user to work in. Once the window has been opened, all visible functions of the program will be performed within it. At any time the window can be shrunk back down into an icon, or made to fit the entire screen. For all intents and purposes, the user has complete control over his windows. Since more than one window can be open at a time, the user can work with more than one program. Being able to work with more than one program brings out other special features of Windows 95. In a regular DOS system only one program can be open at a time. 
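The difference from single-tasking DOS is the scheduler: a multitasking system slices processor time among programs so that each makes progress without waiting for the others to finish. A tiny modern Python sketch of that illusion, with two threads standing in for a "word processor" and a "clock" (an illustration only, nothing resembling the actual Windows 95 internals):

```python
import threading
import time

# Two "programs" run at once; the scheduler slices CPU time between
# them, so both make progress even though one CPU runs one at a time.

results = {"writer": [], "clock": []}

def writer():
    for i in range(5):
        results["writer"].append(f"typed word {i}")
        time.sleep(0.01)   # yield the CPU, as a real program does on I/O

def clock():
    for i in range(5):
        results["clock"].append(f"tick {i}")
        time.sleep(0.01)

threads = [threading.Thread(target=writer), threading.Thread(target=clock)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both finished; neither had to wait for the other to complete first.
print(len(results["writer"]), len(results["clock"]))   # 5 5
```

Both lists fill in interleaved order, much as Windows 95 lets a download continue while you type; under DOS the second "program" could not even start until the first had exited.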
With previous versions of Windows more than one program could be open, but they did not work well together. Since Windows 95 is a 32-bit program, it manipulates memory addresses in a way that makes it look as though your programs are running simultaneously. This makes it easier to share information between programs. For example (I run Windows 95), while I am writing this paper using a word processor, I am logged onto the Internet and have five different programs running. I can move information from the Internet, or any other open program, into this paper without stopping anything else -- something entirely impossible in DOS. Some people think that because they never see DOS anymore, it is not there. This could not be farther from the truth. DOS is alive and well, hidden under the Windows 95 curtain. But unless the user wants to use DOS, there is no reason to even bother with it. In Windows 95, DOS (version 7) has a few added goodies that some users enjoy. The biggest one is being able to open Windows applications by typing the program file name at the DOS prompt. Another is being able to run more than one DOS application at a time. This does not work as well as with Windows applications, but it has a similar effect. DOS can be used alone, outside of Windows 95, as before. Or it can be opened in a window on the desktop like a normal Windows program, and can be manipulated in size and style. The desktop is where the icons and windows we discussed before live. In older versions of Windows the icons lived in the Program Manager. In Windows 95 they live under the Start button. Once the Start button is clicked, it displays a pop-up menu. Moving the mouse pointer in the pop-up menu gives you access to the different programs available. Icons can also be moved onto the desktop itself; these are called shortcuts. Double-clicking a shortcut will open the program the shortcut represents. 
Shortcuts can be linked to a program or a file, and can be moved to any position on the desktop the user likes. You can also change the picture of the icon to any icon picture you have available. The desktop can be fashioned in any way the user likes. For example, colors and background pictures can be changed. Even the colors and thickness of the window outlines and menus can be changed. While programs are open on the desktop, they are displayed on the Taskbar at the bottom of the screen as buttons. One option with the Taskbar is that it may be moved to any of the four sides of the screen. The buttons have a picture and word identifier on them so the user knows which button is for which program. Clicking once on a button will switch to the program represented, which makes it easier to switch between programs. This just about gives the user total control over his computer, which is what most users want. The ease of use is what makes Windows 95 appealing to the "modern" computer user. In time Microsoft will improve the reliability of Windows 95, making it even easier to work with. Being the most complete and user-friendly IBM-compatible operating system on the market, I feel that Windows 95 will be the dominant operating system for several years to come. f:\12000 essays\technology & computers (295)\Windows 95.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Windows 95 may very well be the most talked-about software release in history. With more people than ever using personal computers, and given Microsoft's dominance in this still-growing market, Mr. Gates' newest offering has caused quite a stir. As with any new product in this ultra-competitive industry, Windows 95 has come under intense scrutiny. Advocates of the new operating system applaud its new features and usability, while its opponents talk about the similarities to Apple's operating system. 
As I have never used an Apple computer, I can't address this point, but I will attempt to outline some of the more interesting "new" features of Windows 95. Arguably the most welcome innovation Win 95 offers is the "task bar". Use of the task bar eliminates the need to navigate through several open application windows to get to the one you need. When you first start an application, a corresponding button appears on the task bar. If after opening other windows you need to return to the original window, all you need do is click on the application's button on the task bar and the appropriate window will come to the fore. According to Aley, "the most gratifying, and overdue, improvement is Windows 95's tolerance for file names in plain English" (29-30). Traditionally, users had to think of file names that summed up their work in eight characters or fewer. This was a constant problem because frequently a user would look at a list of files to retrieve and think, "Now what did I save that as?" Those days are over. Windows 95 will let the user save his or her work with names like "New Speech" or "Inventory Spreadsheet No. 1", making the contents of those files obvious. Much to the annoyance of software developers, Windows 95 incorporates many features that previously required add-on software. One such feature is the Briefcase -- a program for synchronizing the information stored on a user's desktop and notebook computers. Keeping track of which files were the most recently updated was a big problem. As Aley puts it, "Which copy of your speech for the sales conference did you work on last, the one in the laptop or the one in the desktop?" (29-30). One solution was to use programs like Laplink, which would analyze which copy of a file was updated last. Now that Windows 95 provides this utility, there is no need to buy the add-on software. While mice have always come with two or even three buttons, most programs have only provided for the use of the left. 
With Windows 95 there is finally a use for the right. "Clicking it calls up a menu of commands that pertain to whatever the cursor is pointing at" (Aley 29-30). Clicking on the background will open a window that will allow you to change the screen savers and wallpaper. Clicking on an icon that represents a disk drive will bring up statistics about that drive. To use Aley's words, "Windows 95 is still clearly a work in progress" (29-30). The software included to let a user connect to The Microsoft Network cannot be used yet because there is no Microsoft Network. The dream of plug-and-play compatibility for PCs has not yet been realized, although in fairness, part of the responsibility for that lies with hardware manufacturers. However, even with these drawbacks, Windows 95 offers many much-needed and useful new features. Works Cited Aley, James. "Windows 95 and Your PC." Fortune 3 Apr. 1995: 29-30. James Connell f:\12000 essays\technology & computers (295)\Windows NT vs Unix as an operating system.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Evolution & Development History In the late 1960s a combined project between researchers at MIT, Bell Labs and General Electric led to the design of a third-generation computer operating system known as MULTICS (MULTiplexed Information and Computing Service). It was envisaged as a computer utility, a machine that would support hundreds of simultaneous timesharing users. They envisaged one huge machine providing computing power for everyone in Boston. The idea that machines as powerful as their GE-645 would be sold as personal computers costing only a few thousand dollars only 20 years later would have seemed like science fiction to them. However, MULTICS proved more difficult than imagined to implement, and Bell Labs withdrew from the project in 1969, as did General Electric, dropping out of the computer business altogether. 
One of the Bell Labs researchers (Ken Thompson) then decided to rewrite a stripped down version of MULTICS, initially as a hobby. He used a PDP-7 minicomputer that no one was using and wrote the code in assembly language. The result was a stripped down, single user version of MULTICS, but Thompson actually got the system to work, and one of his colleagues jokingly called it UNICS (UNiplexed Information and Computing Service). The name stuck but the spelling was later changed to UNIX. Soon Thompson was joined on the project by Dennis Ritchie and later by his entire department. UNIX was moved from the now obsolete PDP-7 to the much more modern PDP-11/20 and then later to the PDP-11/45 and PDP-11/70. These two latter computers had large memories as well as memory protection hardware, making it possible to support multiple users at the same time. Thompson then decided to rewrite UNIX in a high-level language called B. Unfortunately this attempt was not successful, and Ritchie designed a successor to B called C. Together, Thompson and Ritchie rewrote UNIX in C, and C has dominated system programming ever since. In 1974, Thompson and Ritchie published a paper about UNIX, and this publication stimulated many universities to ask Bell Labs for a copy. As it happened, the PDP-11 was the computer of choice at nearly all university computer science departments, and the operating systems that came with it were widely regarded as dreadful, so UNIX quickly came to replace them. The version that first became the standard in universities was Version 6, and within a few years this was replaced by Version 7. By the mid 1980s, UNIX was in widespread use on minicomputers and engineering workstations from a variety of vendors. In 1982, AT&T released the first commercial version of UNIX, System III, based on Version 7. Over a number of years this was improved and upgraded to System V. 
Meanwhile the University of California at Berkeley modified the original Version 6 substantially. They called their version 1BSD (First Berkeley Software Distribution). This was modified over time to 4BSD, and improvements were made such as the use of paging, file names longer than 14 characters, and a new networking protocol, TCP/IP. Some computer vendors, like DEC and Sun Microsystems, based their version of UNIX on Berkeley's rather than AT&T's. There were a few attempts to standardise UNIX in the late 1980s, but only the POSIX committee had any real success, and this was limited. During the 1980s, most computing environments became much more heterogeneous, and customers began to ask for greater application portability and interoperability from systems and software vendors. Many customers turned to UNIX to help address those concerns, and systems vendors gradually began to offer commercial UNIX-based systems. UNIX was a portable operating system whose source could easily be licensed, and it had already established a reputation and a small but loyal customer base among R&D organisations and universities. Most vendors licensed source bases from either the University of California at Berkeley or AT&T (two completely different source bases). Licensees extensively modified the source and tightly coupled it to their own systems architectures to produce as many as 100 proprietary UNIX variants. Most of these systems were (and still are) neither source nor binary compatible with one another, and most are hardware specific. With the emergence of RISC technology and the breakup of AT&T, the UNIX systems category began to grow significantly during the 1980s. The term "open systems" was coined. Customers began demanding better portability and interoperability between the many incompatible UNIX variants. Over the years, a variety of coalitions (e.g. 
UNIX International) were formed to try to gain control over and consolidate the UNIX systems category, but their success was always limited. Gradually, the industry turned to standards as a way of achieving the portability and interoperability benefits that customers wanted. However, UNIX standards and standards organisations proliferated (just as vendor coalitions had), resulting in more confusion and aggravation for UNIX customers. The UNIX systems category is primarily an application-driven systems category, not an operating systems category. Customers choose an application first (for example, a high-end CAD package), then find out which different systems it runs on, and select one. The final selection involves a variety of criteria, such as price/performance, service, and support. Customers generally don't choose UNIX itself, or which UNIX variant they want. UNIX just comes with the package when they buy a system to run their chosen applications. The UNIX category can be divided into technical and business markets: 87% of technical UNIX systems purchased are RISC workstations purchased to run specific technical applications; 74% of business UNIX systems sold are multiuser/server/midrange systems, primarily for running line-of-business or vertical market applications. The UNIX systems category is extremely fragmented. Only two vendors have more than a 10% share of UNIX variant license shipments (Sun and SCO); 12 of the top 15 vendors have shares of 5% or less (based on actual 1991 unit shipments; source: IDC). This fragmentation reflects the fact that most customers who end up buying UNIX are not actually choosing UNIX itself, so most UNIX variants have small and not very committed customer bases. Operating System Architecture Windows NT was designed with the goal of maintaining compatibility with applications written for MS-DOS, Windows for MS-DOS, OS/2, and POSIX. 
This was an ambitious goal, because it meant that Windows NT would have to provide the applications with the application programming interfaces (APIs) and the execution environments that their native operating systems would normally provide. The Windows NT developers accomplished their compatibility goal by implementing a suite of operating system environment emulators, called environment subsystems. The emulators form an intermediate layer between user applications and the underlying NT operating system core. User applications and environment subsystems work together in a client/server relationship. Each environment subsystem acts as a server that supports the application programming interfaces of a different operating system. Each user application acts as the client of an environment subsystem because it uses the application programming interface provided by the subsystem. Client applications and environment subsystem servers communicate with each other using a message-based protocol. At the core of the Windows NT operating system is a collection of operating system components called the NT Executive. The executive's components work together to form a highly sophisticated, general-purpose operating system. They provide mechanisms for interprocess communication, pre-emptive multitasking, symmetric multiprocessing, virtual memory management, device input/output, and security. Each component of the executive provides a set of functions, commonly referred to as native services or executive services. Collectively, these services form the application programming interface (API) of the NT executive. Environment subsystems are applications that call NT executive services. Each one emulates a different operating system environment. For example, the OS/2 environment subsystem supports all of the application programming interface functions used by OS/2 character mode applications. 
It provides these applications with an execution environment that looks and acts like a native OS/2 system. Internally, environment subsystems call NT executive services to do most of their work. The NT executive services provide general-purpose mechanisms for doing most operating system tasks. However, the subsystems must implement any features that are unique to their operating system environments. User applications, like environment subsystems, are run on the NT Executive. Unlike environment subsystems, user applications do not directly call executive services. Instead, they call application programming interfaces provided by the environment subsystems. The subsystems then call executive services as needed to implement their application programming interface functions. Windows NT presents users with an interface that looks like that of Windows 3.1. This user interface is provided by Windows NT's 32-bit Windows subsystem (Win32). The Win32 subsystem has exclusive responsibility for displaying output on the system's monitor and managing user input. Architecturally, this means that the other environment subsystems must call Win32 subsystem functions to produce output on the display. It also means that the Win32 subsystem must pass user input actions to the other environment subsystems when the user interacts with their windows. Windows NT does not maintain compatibility with device drivers written for MS-DOS or Windows for MS-DOS. Instead, it adopts a new layered device-driver architecture that provides many advantages in terms of flexibility, maintainability, and portability. Windows NT's device driver architecture requires that new drivers be written before Windows NT can be compatible with existing hardware. 
While writing new drivers involves a lot of development effort on the part of Microsoft and independent hardware vendors (IHVs), most of the hardware devices supported by Windows for MS-DOS will be supported by new drivers shipped with the final Windows NT product. The device driver architecture is modular in design. It allows big (monolithic) device drivers to be broken up into layers of smaller independent device drivers. A driver that provides common functionality need only be written once. Drivers in adjacent layers can then simply call the common device driver to get their work done. Adding support for new devices is easier under Windows NT than most operating systems because only the hardware-specific drivers need to be rewritten. Windows NT's new device driver architecture provides a structure on top of which compatibility with existing installable file systems (for example, FAT and HPFS) and existing networks (for example, Novell and Banyan Vines) was relatively easy to achieve. File systems and network redirectors are implemented as layered drivers that plug easily into the new Windows NT device driver architecture. In any Windows NT multiprocessor platform, the following conditions must hold: all CPUs are identical, and either all have identical coprocessors or none has a coprocessor; all CPUs share memory and have uniform access to memory. In a symmetric platform, every CPU can access memory, take an interrupt, and access I/O control registers. In an asymmetric platform, one CPU takes all interrupts for a set of slave CPUs. Windows NT is designed to run unchanged on uniprocessor and symmetric multiprocessor platforms. A UNIX system can be regarded as hierarchical in nature. At the highest level is the physical hardware, consisting of the CPU or CPUs, memory and disk storage, terminals and other devices. On the next layer is the UNIX operating system itself. 
The function of the operating system is to allow access to and control of the hardware, and to provide an interface that other software can use to reach the hardware resources within the machine without complete knowledge of what the machine contains. System calls allow user programs to create and manage processes, files and other resources. Programs make system calls by loading arguments into registers and then issuing trap instructions to switch from user mode to kernel mode and enter the UNIX kernel. Since there is no way to issue trap instructions directly from C, a standard library is provided on top of the operating system, with one procedure per system call. The next layer consists of the standard utility programs, such as the shell, editors, compilers, etc., and it is these programs that a user at a terminal invokes. They use the operating system to access the hardware to perform their functions and generally are able to run on different hardware configurations without specific knowledge of them. There are two main parts to the UNIX kernel which are more or less distinguishable. At the lowest level is the machine dependent kernel. This is a piece of code which consists of the interrupt handlers, the low-level I/O system device drivers and some of the memory management software. As with most of the UNIX operating system it is mostly written in C, but since it interacts directly with the machine and processor specific hardware, it has to be rewritten from scratch whenever UNIX is ported to a new machine. This kernel uses the lowest level machine instructions for the processor, which is why it must be changed for each different processor. In contrast, the machine independent kernel runs the same on all machine types because it does not rely as closely on any specific piece of hardware it is running on. 
The machine independent code includes system call handling, process management, scheduling, pipes, signals, memory paging and memory swapping functions, the file system and the higher level part of the I/O system. The machine independent part of the kernel is by far the larger of the two sections, which is why UNIX can be ported to new hardware with relative ease. UNIX does not use the DOS and Windows idea of independently loaded device drivers for each additional hardware item that is not under BIOS control in the machine; instead, the kernel must be recompiled whenever hardware is added or removed, so that it is updated with the new information. This is the equivalent of adding a device driver to a configuration file in DOS or Windows and then rebooting the machine; it is, however, a longer process to undertake. Memory Management Windows NT provides a flat 32-bit address space, half of which is reserved for the OS, and half available to the process. This provides a separate 2 gigabytes of demand-paged virtual memory per process. This memory is accessible to the software developer through the usual malloc() and free() memory allocation and deallocation routines, as well as some advanced Windows NT-specific mechanisms. For a programmer desiring greater functionality for memory control, Windows NT also provides Virtual and Heap memory management APIs. The advantage of using the virtual memory programming interface (VirtualAlloc(), VirtualLock(), VirtualQuery(), etc.) is that the developer has much more control over whether backing store (memory committed in the paging (swap) file to handle physical memory overcommitment) is explicitly marked and removed from the available pool of free blocks. With malloc(), every call is assumed to require the memory to be available upon return from the function call. With VirtualAlloc() and related functions, the memory is reserved, but not committed, until the page on which an access occurs is touched. 
By allowing the application to control the commitment policy through access, fewer system resources are used. The trade-off is that the application must also be able to handle the condition (presumably with structured exception handling) of an actual memory access forcing commitment. Heap APIs are provided to make life easier for applications that use memory with a stack discipline. Multiple heaps can be initialised, each growing or shrinking with subsequent accesses. Synchronisation of access to allocated heaps can be done either explicitly through Windows NT synchronisation objects, or by using an appropriate parameter at the creation of a heap, in which case all access to memory in that particular heap is synchronised between threads in the process. Memory-mapped files are also provided in Windows NT. They provide a convenient way to access disk data as memory, with the Windows NT kernel managing paging. This memory may be shared between processes by using CreateFileMapping() followed by MapViewOfFile(). Windows NT provides thread local storage (TLS) to accommodate the needs of multithreaded applications. Each thread of a process has its own stack, and may have its own memory to keep various information. Windows NT is the first operating system to provide a consistent multithreading API across multiple platforms. A thread is a unit of execution in a process context that shares a global memory state with other threads in that context (if any). When a process is created in Windows NT, memory is allocated for it, a state is set up in the system, and a thread object is created. To start a thread in a currently executing process, the CreateThread() call is used: a function pointer is passed in through lpStartAddr, and this address may be any valid procedure address in the application. Windows NT supports a number of different types of multiprocessing hardware. On these designs, it is possible for different processors to be running different threads of an application simultaneously. 
When using threads in an application, take care to synchronise access to resources shared between threads. Fortunately, Windows NT has very rich synchronisation facilities. Most UNIX developers don't use threads in their applications, since support is not consistent between UNIX platforms. Handles don't have a direct mapping from UNIX; however, they're very important to Win32 applications and deserve discussion. When kernel objects (such as threads, processes, files, semaphores, mutexes, events, pipes, mailslots, and communications devices) are created or opened using the Win32 API, a HANDLE is returned. This handle is a 32-bit quantity that is an index into a handle table specific to that process. Handles have associated ACLs, or Access Control Lists, that Windows NT uses to check against the security credentials of the process. Handles can be obtained by explicitly creating them (usually when an object is created), as the result of an open operation (e.g. OpenEvent()) on a named object in the system, inherited as the result of a CreateProcess() operation (a child process inherits an open handle from its parent process if inheritance was specified when the original handle was created and if the child process was created with the "inherit handles" flag set), or "given away" by DuplicateHandle(). It is important to note that unless one of these mechanisms is used, a handle will be meaningless in the context of a process. For example, suppose process 1 calls CreateEvent() to return a handle that happens to have the ordinal value 0x1FFE. This event will be used to co-ordinate an operation between different processes. Process 2 must somehow get a handle to the event that process 1 created. If process 2 somehow "conjures" 0x1FFE as the right value to use, it still won't have access to the event created by process 1, since that handle value means nothing in the context of process 2. 
If instead, process 1 calls DuplicateHandle() with the handle of process 2 (acquired through calling OpenProcess() with the integral id of process 2), a handle that can be used by process 2 is created. This handle value can then be communicated to process 2 through some IPC mechanism. Handles that are used for synchronisation (semaphores, mutexes, events), as well as those that may be involved in asynchronous I/O (named pipes, files, communications), may be used with WaitForSingleObject() and WaitForMultipleObjects(), which are functionally similar to the select() call in UNIX. Prior to 3BSD, most UNIX systems were based on swapping. When more processes existed than could be kept in physical memory, some of them were swapped out to disk or drum storage. A swapped out process was always swapped out in its entirety, and hence any current process was always either in memory or on disk as a complete unit. All movement between memory and disk was handled by the upper level of a split level scheduler, known as the (memory) swapper. Swapping from memory to disk was initiated when the kernel ran out of free physical memory. In order to choose a victim to evict, the swapper would first look at the processes that were blocked waiting for something, such as terminal input. If more than one such process was found, the process whose priority plus residence time was the highest was chosen as a candidate for swapping to disk. Thus a process that had consumed a large amount of CPU time recently was a good candidate, as was one that had been in memory a long time, even if it was mostly doing I/O. If no blocked process was available in memory, then a ready process was chosen based on the same criteria of priority plus residence time. Starting with 3BSD, memory paging was added to the operating system to handle the ever larger programs that were being written. Both 4BSD and System V implemented demand paging in a similar fashion. 
The theory of demand paging is that a process need not be entirely resident in memory in order to continue execution. All that is actually required is the user structure and the page tables. If these are swapped into memory, the process is deemed to be sufficiently in memory and can be scheduled to execute. The pages of the text, data and stack segments are brought in dynamically, one at a time, as they are referenced, thus leaving memory free for other tasks rather than filling it with data which may be referenced only once. If the user structure and page table are not in memory, the process cannot be executed until the swapper swaps them in from disk. Paging is implemented partly by the main kernel and partly by a process called the page daemon. Like all daemons, the page daemon is started up periodically so that it can look around to see if there is any work for it to do. If it discovers that the number of free pages in memory is too low, it initiates action to free up more pages. When a process is started, it may cause a page fault because one of its pages is not resident in memory. When a page fault occurs, the operating system takes the first free page frame on the list of page frames, removes it from the list and reads the needed page into it. If the free page frame list is empty, the process must be suspended until the page daemon has had time to free a page frame from another process. The page replacement algorithm is executed by the page daemon. At a set interval (commonly 250 milliseconds, but varying from system to system), it is activated to see if the number of free page frames is at least equal to a system parameter known as lotsfree (typically set to 1/4 of memory). If there are insufficient free page frames, the page daemon will start transferring pages from memory to disk until lotsfree page frames are available. 
Alternatively, if the page daemon discovers that more than lotsfree page frames are on the free list, it has no work to perform and terminates until it is next started by the system. If the machine has plenty of memory and few active processes, it will be inactive for most of the time. The page daemon uses a modified version of the clock algorithm. It is a global algorithm, which means that when removing a page it does not take into account whose page is being removed. Thus the number of pages each process has assigned to it varies in time, depending both on its own requirements and those of other processes. The size of the data segment may vary depending upon what has been requested, the operating system tracking allocated and unallocated memory blocks while the memory allocator manages the content of the data segment. Process Management, Inter-process Communication and Control The Windows NT process model differs from that of UNIX in a number of aspects, including process groups, terminal groups, setuid, memory layout, etc. For some programs, such as shells, a re-architecture of certain portions of the code is inevitable. Fortunately, most applications don't inherently rely on the specific semantics of UNIX processes, since even these differ between UNIX versions. Quoting from the online help provided with the Windows NT SDK: Win32 exposes processes and threads of execution within a process as objects. Functions exist to create, manipulate, and delete these objects. A process object represents a virtual address space, a security profile, a set of threads that execute in the address space of the process, and a set of resources or objects visible to all threads executing in the process. A thread object is the agent that executes program code (and has its own stack and machine state). Each thread is associated with a process object which specifies the virtual address space mapping for the thread. 
Several thread objects can be associated with a single process object, which enables concurrent execution of multiple threads in a single address space (and possible simultaneous execution in a multiprocessor system running Windows NT). On multiprocessor systems running Windows NT, multiple threads may execute at the same time but on different processors. In order to support the process structure of Windows NT, APIs include: * Support for process and thread creation and manipulation. * Support for synchronisation between threads within a process, and synchronisation objects that can be shared by multiple processes to allow synchronisation between threads whose processes have access to the synchronisation objects. * A uniform sharing mechanism that provides security features that limit/control the sharing of objects between processes. Windows NT provides the ability to create new processes (CreateProcess) and threads (CreateThread). Rather than "inherit" everything always, as is done in UNIX with the fork call, CreateProcess accepts explicit arguments that control aspects of process creation such as file handle inheritance, security attributes, debugging of the child process, environment, default directory, etc. It is through the explicit creation of a thread or process with appropriate security descriptors that credentials are granted to the created entity. Win32 does not provide the capability to "clone" a running process (and its associated in-memory contents); this is not such a hardship, since most UNIX code forks and then immediately calls exec. Applications that depend on the cloning semantics of fork may have to be rearchitected a bit to use threads (especially where large amounts of data sharing between parent and child occur), or in some cases, to use IPC mechanisms to copy the relevant data between two distinct processes after the CreateProcess call is executed. 
If a child process is to inherit the handles of the creator process, the bInheritHandles flag of the CreateProcess call can be set. In this case, the child's handle table is filled in with handles valid in the context of the child process. If this flag is not specified, handles must be given away by using the DuplicateHandle call. Windows NT was not designed to support "dumb terminals" as a primary emphasis, so the concept of terminal process groups and the associated semantics are not implemented. Applications making assumptions about groups of applications (for example, that killing the parent process kills all child processes) will have to investigate the GenerateConsoleCtrlEvent API, which provides a mechanism to signal groups of applications controlled by a parent process using the CREATE_NEW_PROCESS_GROUP flag in the CreateProcess API. Programs making assumptions about the layout of processes in memory (GNU EMACS, for example, which executes, then "dumps" the image of variables in memory to disk, which is subsequently "overlayed" on start-up to reduce initialisation time), especially the relationship of code segments to data and stack, will likely require modification. Generally, practices such as these are used to get around some operating system limitation or restriction. At this level, a rethinking of the structure of that part of the application is generally in order, to examine supported alternatives to the "hack" that was used (perhaps memory mapped files for particular cases like this). For those who must deal with an application's pages on this level, there is a mechanism by which a process may be opened (OpenProcess), and individual memory pages, threads, and stacks examined or modified. There is no direct equivalent of the UNIX setuid. There are, however, a number of Windows NT alternatives to use depending on the task to be accomplished. 
If the task at hand is a daemon that runs with a fixed user context, it would be best to use a Windows NT service (again, the online help is invaluable for this information). A Windows NT service is equivalent to a "daemon" running with fixed user credentials, with the added benefit of being administrable locally or remotely through standard Windows NT administration facilities. For instances when a process must "impersonate" a particular user, it's suggested that a server program be written that communicates thr f:\12000 essays\technology & computers (295)\Windows NT.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ What was once a small and simple collection of computers run by the Defence Department is now a massive worldwide network of computers, what we call the 'Internet'. The word "Internet" literally means "network of networks." In itself, the Internet is composed of thousands of smaller local networks scattered throughout the globe. It connects roughly 15 million users in more than 50 countries a day. The World Wide Web (WWW) is the most widely used part of the Internet. The Web refers to a body of information, while the Internet refers to the physical side of the global network, containing a large number of cables and computers. The Internet is a 'packet-switching' computer network. When a person sends a message over the Internet, it is broken into tiny pieces, called 'packets'. These packets travel over many different routes between the computer from which the message is sent and the computer to which it is addressed. Phone lines, either fibre-optic or copper ones, carry most of the data packets. Internet computers along the path switch each packet toward its destination, but no two packets need to follow the same path. The Internet is designed so that packets always take the best available route at the time they are travelling. 
'Routers', boxes of circuit boards and microchips, do the essential task of directing and redirecting packets along the network. Much smaller boxes of circuit boards and microchips, called 'modems', do the task of interpreting between the phone lines and the computer. The packets are all switched toward a destination and reassembled by the destination computer. Today's Internet contains enough redundant and interconnected circuits to simply reroute the data if any portion of the network goes down or gets overloaded. The packet-switching nature of the Internet gives it sufficient speed and flexibility to support real-time communication, such as sending messages to other people in a chat environment (IRC). Every packet is written in a particular protocol language, called TCP/IP, which stands for Transmission Control Protocol/Internet Protocol. This protocol is the common language of the Internet, and it supports two major programs: File Transfer Protocol (FTP) and Telnet. FTP lets users transfer files from one Internet computer to another. Telnet lets a person log into a remote computer. These two tools have been combined in complex ways to create Internet tools such as Gopher, the World Wide Web, and IRC. Some collections of phone lines and routers are larger and more powerful than others. Sprint and MCI have each built collections of phone lines and routers that crisscross the United States and can carry large amounts of data. There are six companies in the US with large, nationwide networks of high-speed phone lines and routers: MCI, Sprint, AGIS, UUNet/AlterNet, ANS, and PSI. Together they make up what is often called the 'Internet backbone'. Data packets travelling on a 'backbone' network stay within that network for much of their journey, because there is only a handful of places where the backbone networks meet.
For example,[1] a packet travelling on a Sprint circuit to a Sprint router can only transfer to an MCI circuit at certain places, much as city streets often run parallel to each other for many miles before reaching an intersection. These intersections, called 'Network Access Points' (NAPs), are crucial to the transmission of data on the Internet. A Web server is a program running on a computer whose only purpose is to serve documents to other computers when asked. A Web client is a program that interfaces (talks) with the user and requests documents from a server as the user asks for them. The server only operates when a request for a document is made. The process is very simple. One example: running a Web browser, the user selects a piece of hypertext connected to another text - "Planes." The Web client connects to a computer specified by a network address somewhere on the Internet and asks that computer's Web server for "Planes." The server responds by sending the text and any other media within it (pictures, sounds, movies) to the user's screen. The World Wide Web handles thousands of these transactions per hour throughout the world, creating a web of information. The language that Web clients and servers use to talk with each other is called the 'Hypertext Transfer Protocol' (HTTP). All Web clients and servers must be able to speak HTTP to send and receive hypermedia documents. The standard language the Web uses for creating and recognizing hypermedia documents is the 'Hypertext Markup Language' (HTML). Another formatting language used for Web documents is the 'Standard Generalized Markup Language' (SGML). HTML is widely liked because of its ease of use. Web documents are usually written in HTML and are usually named with the suffix '.html'.
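The client/server exchange described above can be mimicked with plain strings. This is a sketch only: no real network connection is opened, and the host name and document follow the "Planes" example in the essay rather than any real site.

```python
# Toy model of the Web client/server conversation: build the text a
# client would send, and parse the text a server would return.

def build_request(host, path):
    """Compose a minimal HTTP/1.0 GET request for a document."""
    return "GET " + path + " HTTP/1.0\r\nHost: " + host + "\r\n\r\n"

def parse_response(raw):
    """Split a raw HTTP response into (status_code, body)."""
    head, _, body = raw.partition("\r\n\r\n")
    status_line = head.split("\r\n")[0]   # e.g. "HTTP/1.0 200 OK"
    return int(status_line.split()[1]), body

request = build_request("www.example.com", "/planes.html")
status, body = parse_response(
    "HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n<h1>Planes</h1>")
```

The whole protocol is readable text: the client names the document it wants, and the server answers with a status line, some headers, a blank line, and then the document itself.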
HTML documents are nothing more than standard 7-bit ASCII files with formatting codes that contain information about the layout (text styles, document titles, paragraphs, lists, and hyperlinks). Hyperlinks are links in the document that lead to other documents or to another Web site. HTML uses 'Uniform Resource Locators' (URLs) to represent hypermedia links and links to network services within documents. The first part of the URL (before the two slashes) specifies the method of access. The second is typically the address of the computer where the data or service is found. Further parts may specify the name of a file, the port to connect to, or the text to search for in a database. Most Web browsers allow the user to specify a URL and connect to that document or service. When selecting hypertext in an HTML document, the user is actually sending a request to open a URL. In this way, hyperlinks can be made not only to other texts and media but also to other network services. The powerful, sophisticated access that the Internet provides is truly amazing. It is spreading faster than cellular phones and fax machines did. The number of people connecting to the Internet is growing at a rapid rate, along with the number of "host" machines with direct TCP/IP connections. The main reason the Internet is flourishing so rapidly is its freedom: no one actually owns the Internet, and there are virtually no rules for users. As the Internet grows, many new activities are joining in, like 'Internet radio', which will allow real-time call-in shows and music to be sent over the Internet. As the Internet expands into another decade, it will become even more interesting and complex. FOOTNOTES: 1. John Quarterman, The Matrix: Computer Networks and Conferencing Systems Worldwide (Bedford, MA: Digital Press, 1990), 42.
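The URL anatomy described above (method of access, computer address, port, file name) can be checked with Python's standard urllib.parse module. The URL itself is made up for illustration.

```python
from urllib.parse import urlparse

# A hypothetical URL, broken into the parts the essay describes.
url = "http://www.example.edu:8080/docs/planes.html"
parts = urlparse(url)

scheme = parts.scheme      # method of access, before the two slashes: "http"
host = parts.hostname      # address of the computer: "www.example.edu"
port = parts.port          # port to connect to: 8080
path = parts.path          # name of the file: "/docs/planes.html"
```

Every hyperlink in an HTML document ultimately boils down to one of these strings, which is why a single browser can reach documents, databases, and other network services alike.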
f:\12000 essays\technology & computers (295)\Windows revealed.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

To view Internet.txt on screen in Notepad, maximize the Notepad window. To print Internet.txt, open it in Notepad, or another word processor, and then use the Print command on the File menu.

--------
CONTENTS
--------
Why Is the Internet So Popular?
How Can I Connect to the Internet?
Notes
Online Services--Where You Can Get MSIE.exe

===============================
Why Is the Internet So Popular?
===============================
The Internet is a rich source of online information, covering almost any topic you can imagine. When you are connected to the Internet, you can:
- Exchange messages with people all over the world.
- Get the latest news, weather, sports, and entertainment information.
- Download software, including games, pictures, and programs.
- Join discussion groups, such as bulletin boards and newsgroups.

==================================
How Can I Connect to the Internet?
==================================
-- If you do not currently have an Internet account, sign up for The Microsoft Network (MSN) by double-clicking the MSN icon on your desktop and then following the instructions. The Microsoft Network includes access to the Internet as part of its service.
-- If you have an account with an online service (see list below), or use bulletin board services (BBS) regularly, do one of the following:
   a) Sign up for The Microsoft Network by double-clicking the MSN icon on your desktop.
   b) Download The Microsoft Internet Explorer files (MSIE.exe) from your online service (see full list of locations below).
-- If you already have an account with an Internet access provider, you can download Microsoft's browsing tool, Internet Explorer, from http://www.microsoft.com. Besides being easy to use, Internet Explorer enables you to create desktop shortcuts to your favorite Web sites. Try it out!
=====
Notes
=====
If you do not see the MSN icon on your desktop, you can install it by opening Control Panel and then double-clicking the Add/Remove Programs icon. Then, click the Windows Setup tab. If you have Microsoft Plus!, you already have Internet Explorer.

===========================================
Online Services--Where You Can Get MSIE.exe
===========================================
On the Internet: ftp://ftp.microsoft.com/PerOpSys/Win_News/
On the World Wide Web: http://www.microsoft.com/
On The Microsoft Network: From Main Menu, Categories\Computers and Software\Software\Microsoft\Windows 95
On CompuServe: type GO WINNEWS
On Prodigy: JUMP WINNEWS
On America Online: Use keyword WINNEWS
On GEnie: MOVE TO PAGE 95

f:\12000 essays\technology & computers (295)\Wire Pirates.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Wire Pirates

Someday the Internet may become an information superhighway, but right now it is more like a 19th-century railroad that passes through the badlands of the Old West. As waves of new settlers flock to cyberspace in search of free information or commercial opportunity, they make easy marks for sharpers who play a keyboard as deftly as Billy the Kid ever drew a six-gun. It is difficult even for those who ply it every day to appreciate how much the Internet depends on collegial trust and mutual forbearance. The 30,000 interconnected computer networks and 2.5 million or more attached computers that make up the system swap gigabytes of information based on nothing more than a digital handshake with a stranger. Electronic impersonators can commit slander or solicit criminal acts in someone else's name; they can even masquerade as a trusted colleague to convince someone to reveal sensitive personal or business information. "It's like the Wild West," says Donn B. Parker of SRI: "No laws, rapid growth and enterprise - it's shoot first or be killed."
To understand how the Internet, on which so many base their hopes for education, profit and international competitiveness, came to this pass, it can be instructive to look at the security record of other parts of the international communications infrastructure. The first and biggest error that designers seem to repeat is adopting the "security through obscurity" strategy. Time and again, attempts to keep a system safe by keeping its vulnerabilities secret have failed. Consider, for example, the running war between AT&T and the phone phreaks. When hostilities began in the 1960s, phreaks could manipulate the long-distance network with relative ease, making unpaid telephone calls by playing certain tones into the receiver. One phreak, John Draper, was known as "Captain Crunch" for his discovery that a modified cereal-box whistle could make the 2,600-hertz tone required to unlock a trunk line. The next generation of security was the telephone credit card. When the cards were first introduced, a card number consisted of a sequence of digits (usually area code, number and billing office code) followed by a "check digit" that depended on the other digits. Operators could easily perform the math to determine whether a particular credit-card number was valid. But phreaks could just as easily figure out how to generate the proper check digit for any given telephone number. So in 1982 AT&T finally put in place a more robust method. The corporation assigned each card four check digits (the "PIN", or personal identification number) that could not easily be computed from the other 10. A nationwide on-line database made the numbers available to operators so that they could determine whether a card was valid. Since then, so-called "shoulder surfers" have haunted train stations, hotel lobbies, airline terminals and other likely places for the theft of telephone credit-card numbers.
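The article does not give AT&T's actual check-digit formula, so the sketch below uses a simple stand-in scheme (the check digit is chosen so that all digits sum to a multiple of 10). It illustrates exactly why such schemes failed: anyone who learns the rule can mint valid-looking numbers as easily as an operator can verify them.

```python
# Hypothetical mod-10 check-digit scheme (NOT AT&T's real formula):
# the check digit makes the sum of all digits a multiple of 10.

def check_digit(digits):
    """Return the digit that brings the digit sum to a multiple of 10."""
    return (10 - sum(int(d) for d in digits) % 10) % 10

def is_valid(card):
    """The 'operator' test: last digit must be the correct check digit."""
    return check_digit(card[:-1]) == int(card[-1])

number = "2125551234"                        # area code + number (made up)
card = number + str(check_digit(number))     # a "valid" card number
forged = number + str((check_digit(number) + 1) % 10)
```

The same arithmetic serves both the defender and the attacker, which is why AT&T eventually moved to PINs that could not be computed from the other digits at all.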
When they see a victim punching in a credit card number, they transmit it to confederates for widespread use. Kluepfel, the inventor of this system, noted ruefully that his own card was compromised one day in 1993 and used to originate more than 600 international calls in the two minutes before network-security specialists detected and canceled it. The U.S. Secret Service estimates that stolen calling cards cost long-distance carriers and their customers on the order of $2.5 billion a year. During the same years that telephone companies were fighting the phone phreaks, computer scientists were laying the foundations of the Internet. The very nature of Internet transmission is based on a collegial attitude. Data packets are forwarded along network links from one computer to another until they reach their destination. A packet may take a dozen hops or more, and any of the intermediary machines can read its contents. Only a gentleman's agreement assures the sender that the recipient and no one else will read the message. As the Internet grew, however, the character of its population began changing, and many of the newcomers had little idea of the complex social contract. Since then, the Internet's vulnerabilities have only gotten worse. Anyone who can scrounge up a computer, a modem and $20 a month in connection fees can have a direct link to the Internet and be subject to break-ins - or launch attacks on others. The internal network of a high-technology company may look much like the young Internet - dozens or even hundreds of users, all sharing information freely, making use of data stored on a few file servers, not even caring which workstation they use to access their files. As long as such an idyllic little pocket of cyberspace remains isolated, carefree security systems may be defensible.
System administrators can even set up their network file system to export widely used file directories to "world" - allowing everyone to read them - because, after all, the world ends at their corporate boundaries. It does not take much imagination to see what can happen when such a trusting environment opens its digital doors to the Internet. Suddenly, "world" really means the entire globe, and "any computer on the network" means every computer on any network. Files meant to be accessible to colleagues down the hall or in another department can now be reached from Finland or Fiji. What was once a private line is now a highway open to as much traffic as it can bear. If the Internet, storehouse of wonders, is also a no-computer's-land of invisible perils, how should newcomers to cyberspace protect themselves? Security experts agree that the first layer of defense is educating users and system administrators to avoid particularly stupid mistakes, such as using no passwords at all. The next level of defense is the so-called fire wall, a computer that protects an internal network from intrusion. To build a fire wall you need two dedicated computers: one connected to the Internet and the other connected to the corporation's network. The external machine examines all incoming traffic and forwards only the "safe" packets to its internal counterpart. The internal gateway, meanwhile, accepts incoming traffic only from the external one, so that if unauthorized packets do somehow find their way to it, they cannot pass. Some people foresee an Internet made up mostly of private enclaves behind fire walls. A government spokesman notes, "There are those who say that fire walls are evil, that they are balkanizing the Internet, but brotherly love falls on its face when millions of dollars are involved."
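The two-machine fire wall described above can be modelled in a few lines. This is a toy sketch: the rule set, packet fields, and gateway names are all invented for illustration, and real fire walls inspect far more than a port number.

```python
# Toy model of a two-machine fire wall: the external gateway filters,
# the internal gateway trusts only the external gateway.

ALLOWED_PORTS = {25, 80}          # e.g. permit only mail and Web traffic
TRUSTED_SOURCE = "external-gw"    # the only source the inside will accept

def external_gateway(packet):
    """Forward a packet inward only if it matches the rule set."""
    if packet["port"] in ALLOWED_PORTS:
        return dict(packet, forwarded_by=TRUSTED_SOURCE)
    return None                   # unsafe packet: dropped at the perimeter

def internal_gateway(packet):
    """Accept traffic only if it came through the external gateway."""
    return packet is not None and packet.get("forwarded_by") == TRUSTED_SOURCE
```

The key design point is the second check: even a packet that somehow reaches the internal machine directly is refused, because it lacks the external gateway's mark.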
In the meantime, the network grows, and people and businesses ent

f:\12000 essays\technology & computers (295)\WorkStudy Internship.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

HuxFinn CIS497 Internship Course Project HELPDESK INTERNSHIP SUMMARY RESPONSIBILITIES : The Computer User's Support Services has several divisions: Office Automations, Networking, System Programmers, Operations, Hardware Services, Helpdesk Services, Computer Laboratories, Audio/Visual, and Switchboard. CUSS is responsible for the maintenance and development of the Purdue University network infrastructure. As a division of CUSS, Helpdesk channels incoming requests for computer service from Purdue faculty and staff. The Helpdesk Technician constantly monitors campus mail and voice mail, and answers the phone. Any and all questions or problems must be responded to with promptness and courtesy. Ideally, Helpdesk resolves the problem over the phone. When this is not possible, the technician goes on site armed with appropriate software and a toolkit to assist the user with basic repair. Any questions or problems requiring a higher level of expertise or authority are relayed to specialists within one of the other divisions, and a trouble ticket is created. The Helpdesk maintains careful records of all correspondence with each customer. Each call is logged in 'Magic', a specialized database application. As problems are resolved, the solutions are recorded in an ever-growing library of Helpdesk documentation. The objective in creating 'Tip Sheets' is to prevent the rework involved in frequently recurring problems. They facilitate prompt and efficient customer service. An extensive collection of reference manuals for all campus software is also maintained. PROJECTS : Nupop is a DOS-based e-mail utility that is widely used among faculty and staff. The original developers of this software no longer support it.
I took the initiative in collecting as much information as possible about Nupop so that Helpdesk could fill the gap and respond to problems. When Banner was made available for Windows, a rash of users experienced difficulty. I responded by assisting in identifying the problem and had the satisfaction of participating in its resolution. CUSS and ISCP sponsored the "Taste of Technology" open house this semester. I was proud to represent Helpdesk during the activities. Helpdesk itself is a team project. No one person can possibly answer all the questions or solve all the problems. We depend on one another as an information resource. Job activities must be coordinated to best provide quality service to our customers. The spirit of cooperation is essential. ACCOMPLISHMENTS : Majoring in ISCP lays a good foundation for the skills required in Helpdesk. Helpdesk builds on these skills. A typical day includes jammed printers, login problems, stuck keyboards, obscure error messages, forgotten passwords, virus outbreaks, equipment moves, burnt-out monitors, smeared printouts, and memory shortages. Then there are the more esoteric software questions: "What happened to my Word macros? How do I use Reachout? How can I print an attached document in mail? How can I be included in Distribution E? What happened to my Toolbar? How do I unformat a disk?" As you can guess, an effective Helpdesk Tech requires a goodly amount of versatility and resourcefulness. As representatives of CUSS and the University, technicians must remain patient and agreeable. Our callers are frequently experiencing frustration and may be irritable. The phones often ring continually. Often two or three conversations are conducted simultaneously. A good tech learns to take interruptions in stride. On Helpdesk you develop not just a repertoire of hardware and software knowledge. You develop people skills.
AVERAGE WORK HOURS : Throughout the majority of the Spring 96 semester, I worked full-time at the Helpdesk. This was a 40-hour week. The Helpdesk opens for business at 7:30 AM and closes at 4:30 PM. At that time, the phone lines are forwarded to receive voice mail around the clock: 24 hours a day, 365 days a year. HARDWARE AND SOFTWARE : Every effort is made to keep the Helpdesk on a par with current campus technology. The Helpdesk is supplied with several PCs, a DEC terminal, and a Macintosh. The technician must also become familiar with every model of printer on campus: Panasonic, Epson, Toshiba, and Hewlett Packard. There is also exposure to printer netports, scanners, LAN cards, CD-ROM, and the network infrastructure. The software includes MS Office, Excel, Powerpoint, Access, Word, Word for DOS, Lotus, FoxPro, Netscape, PC Slots, McAfee, WordPerfect, and CCMail. As the faculty uses Nupop, Banner, Labres, Reachout, and SPSS, these must also be understood. Helpdesk is also experimenting with Net Remote, an application that enables remote control of any computer on the network. Windows 95 is in the introductory stages. Its release on campus is under evaluation by a committee. Netscape Version 2.01 is also being evaluated for presentation on the network. DEDICATED TRAINING : CUSS keeps a library of professionally prepared videos for the new Helpdesk worker to view. These videos discuss the responsibilities and skills of the Support Service professional. Additionally, technicians are encouraged to attend any computer-related seminars or lectures available on campus. The Helpdesk is a fast-paced environment. For this reason, the majority of training is not formalized, but is acquired on the job. Training is non-stop and extremely wide-ranging. If there is a lull in the course of the day, we take the opportunity to work on documentation or brush up our knowledge by reading one of the software reference manuals.
The Internet has also proven an invaluable resource.

f:\12000 essays\technology & computers (295)\X Hacking54.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

HACKING Contents ~~~~~~~~ This file will be divided into four parts: Part 1: What is Hacking, A Hacker's Code of Ethics, Basic Hacking Safety Part 2: Packet Switching Networks: Telenet- How it Works, How to Use it, Outdials, Network Servers, Private PADs Part 3: Identifying a Computer, How to Hack In, Operating System Defaults Part 4: Conclusion- Final Thoughts, Books to Read, Boards to Call, Acknowledgements Part One: The Basics ~~~~~~~~~~~~~~~~~~~~ As long as there have been computers, there have been hackers. In the 50's at the Massachusetts Institute of Technology (MIT), students devoted much time and energy to ingenious exploration of the computers. Rules and the law were disregarded in their pursuit of the 'hack'. Just as they were enthralled with their pursuit of information, so are we. The thrill of the hack is not in breaking the law, it's in the pursuit and capture of knowledge. To this end, let me contribute my suggestions for guidelines to follow, to ensure not only that you stay out of trouble, but that you pursue your craft without damaging the computers you hack into or the companies who own them. I. Do not intentionally damage *any* system. II. Do not alter any system files other than ones needed to ensure your escape from detection and your future access (Trojan Horses, Altering Logs, and the like are all necessary to your survival for as long as possible.) III. Do not leave your (or anyone else's) real name, real handle, or real phone number on any system that you access illegally. They *can* and will track you down from your handle! IV. Be careful who you share information with. Feds are getting trickier. Generally, if you don't know their voice phone number, name, and occupation, or haven't spoken with them by voice in non-info-trading conversations, be wary. V.
Do not leave your real phone number with anyone you don't know. This includes logging on boards, no matter how k-rad they seem. If you don't know the sysop, leave a note naming some trustworthy people who can validate you. VI. Do not hack government computers. Yes, there are government systems that are safe to hack, but they are few and far between. And the government has infinitely more time and resources to track you down than a company that has to make a profit and justify expenses. VII. Don't use codes unless there is *NO* way around it (you don't have a local telenet or tymnet outdial and can't connect to anything 800...) Use codes long enough, and you will get caught. Period. VIII. Don't be afraid to be paranoid. Remember, you *are* breaking the law. It doesn't hurt to store everything encrypted on your hard disk, or keep your notes buried in the backyard or in the trunk of your car. You may feel a little funny, but you'll feel a lot funnier when you meet Bruno, your transvestite cellmate who axed his family to death. IX. Watch what you post on boards. Most of the really great hackers in the country post *nothing* about the system they're currently working on except in the broadest sense (I'm working on a UNIX, or a COSMOS, or something generic. Not "I'm hacking into General Electric's Voice Mail System" or something inane and revealing like that.) X. Don't be afraid to ask questions. That's what more experienced hackers are for. Don't expect *everything* you ask to be answered, though. There are some things (LMOS, for instance) that a beginning hacker shouldn't mess with. You'll either get caught, or screw it up for others, or both. XI. Finally, you have to actually hack. You can hang out on boards all you want, and you can read all the text files in the world, but until you actually start doing it, you'll never know what it's all about.
There's no thrill quite the same as getting into your first system (well, ok, I can think of a couple of bigger thrills, but you get the picture.) One of the safest places to start your hacking career is on a computer system belonging to a college. University computers have notoriously lax security, and are more used to hackers, as every college computer department has one or two, so they are less likely to press charges if you should be detected. But the odds of them detecting you and having the personnel to commit to tracking you down are slim as long as you aren't destructive. If you are already a college student, this is ideal, as you can legally explore your computer system to your heart's desire, then go out and look for similar systems that you can penetrate with confidence, as you're already familiar with them. So if you just want to get your feet wet, call your local college. Many of them will provide accounts for local residents at a nominal (under $20) charge. Finally, if you get caught, stay quiet until you get a lawyer. Don't volunteer any information, no matter what kind of 'deals' they offer you. Nothing is binding unless you make the deal through your lawyer, so you might as well shut up and wait. Part Two: Networks ~~~~~~~~~~~~~~~~~~ The best place to begin hacking (other than a college) is on one of the bigger networks such as Telenet. Why? First, there is a wide variety of computers to choose from, from small Micro-Vaxen to huge Crays. Second, the networks are fairly well documented. It's easier to find someone who can help you with a problem off of Telenet than it is to find assistance concerning your local college computer or high school machine. Third, the networks are safer. Because of the enormous number of calls that are fielded every day by the big networks, it is not financially practical to keep track of where every call and connection are made from.
It is also very easy to disguise your location using the network, which makes your hobby much more secure. Telenet has more computers hooked to it than any other system in the world, once you consider that from Telenet you have access to Tymnet, ItaPAC, JANET, DATAPAC, SBDN, PandaNet, THEnet, and a whole host of other networks, all of which you can connect to from your terminal. The first step that you need to take is to identify your local dialup port. This is done by dialing 1-800-424-9494 (1200 7E1) and connecting. It will spout some garbage at you and then you'll get a prompt saying 'TERMINAL='. This is asking for your terminal type. If you have vt100 emulation, type it in now. Or just hit return and it will default to dumb terminal mode. You'll now get a prompt that looks like a @. From here, type @c mail and then it will ask for a Username. Enter 'phones' for the username. When it asks for a password, enter 'phones' again. From this point, it is menu driven. Use this to locate your local dialup, and call it back locally. If you don't have a local dialup, then use whatever means you wish to connect to one long distance (more on this later.) When you call your local dialup, you will once again go through the TERMINAL= stuff, and once again you'll be presented with a @. This prompt lets you know you are connected to a Telenet PAD. PAD stands for either Packet Assembler/Disassembler (if you talk to an engineer), or Public Access Device (if you talk to Telenet's marketing people.) The first description is more correct. Telenet works by taking the data you enter on the PAD you dialed into, bundling it into a 128-byte chunk (normally... this can be changed), and then transmitting it at speeds ranging from 9600 to 19,200 baud to another PAD, which then takes the data and hands it down to whatever computer or system it's connected to.
Basically, the PAD allows two computers that have different baud rates or communication protocols to communicate with each other over a long distance. Sometimes you'll notice a time lag in the remote machine's response. This is called PAD delay, and is to be expected when you're sending data through several different links. What do you do with this PAD? You use it to connect to remote computer systems by typing 'C' for connect and then the Network User Address (NUA) of the system you want to go to. An NUA takes the form:

    031103130002520
    \___/\_/\_____/
      |   |    |
      |   |    +---- network address
      |   +--------- area prefix
      +------------- DNIC

This is a summary of DNICs (taken from Blade Runner's file on ItaPAC) according to their country and network name:

    DNIC   Network Name   Country        |  DNIC   Network Name   Country
    ----------------------------------------------------------------------
    02041  Datanet 1      Netherlands    |  03110  Telenet        USA
    02062  DCS            Belgium        |  03340  Telepac        Mexico
    02080  Transpac       France         |  03400  UDTS-Curacau   Curacau
    02284  Telepac        Switzerland    |  04251  Isranet        Israel
    02322  Datex-P        Austria        |  04401  DDX-P          Japan
    02329  Radaus         Austria        |  04408  Venus-P        Japan
    02342  PSS            UK             |  04501  Dacom-Net      South Korea
    02382  Datapak        Denmark        |  04542  Intelpak       Singapore
    02402  Datapak        Sweden         |  05052  Austpac        Australia
    02405  Telepak        Sweden         |  05053  Midas          Australia
    02442  Finpak         Finland        |  05252  Telepac        Hong Kong
    02624  Datex-P        West Germany   |  05301  Pacnet         New Zealand
    02704  Luxpac         Luxembourg     |  06550  Saponet        South Africa
    02724  Eirpak         Ireland        |  07240  Interdata      Brazil
    03020  Datapac        Canada         |  07241  Renpac         Brazil
    03028  Infogram       Canada         |  09000  Dialnet        USA
    03103  ITT/UDTS       USA            |  07421  Dompac         French Guiana
    03106  Tymnet         USA            |

There are two ways to find interesting addresses to connect to. The first and easiest way is to obtain a copy of the LOD/H Telenet Directory from the LOD/H Technical Journal #4 or 2600 Magazine. Jester Sluggo also put out a good list of non-US addresses in Phrack Inc. Newsletter Issue 21.
These files will tell you the NUA, whether it will accept collect calls or not, what type of computer system it is (if known) and who it belongs to (also if known.) The second method of locating interesting addresses is to scan for them manually. On Telenet, you do not have to enter the 03110 DNIC to connect to a Telenet host. So if you saw that 031104120006140 had a VAX on it you wanted to look at, you could type @c 412 614 (0's can be ignored most of the time.) If this node allows collect billed connections, it will say 412 614 CONNECTED and then you'll possibly get an identifying header or just a Username: prompt. If it doesn't allow collect connections, it will give you a message such as 412 614 REFUSED COLLECT CONNECTION with some error codes out to the right, and return you to the @ prompt. There are two primary ways to get around the REFUSED COLLECT message. The first is to use a Network User Id (NUI) to connect. An NUI is a username/password combination that acts like a charge account on Telenet. To connect to node 412 614 with NUI junk4248, password 525332, I'd type the following: @c 412 614,junk4248,525332 <---- the 525332 will *not* be echoed to the screen. The problem with NUIs is that they're hard to come by unless you're a good social engineer with a thorough knowledge of Telenet (in which case you probably aren't reading this section), or you have someone who can provide you with them. The second way to connect is to use a private PAD, either through an X.25 PAD or through something like Netlink off of a Prime computer (more on these two below.) The prefix in a Telenet NUA oftentimes (not always) refers to the phone area code that the computer is located in (i.e. 713 xxx would be a computer in Houston, Texas.) If there's a particular area you're interested in (say, New York City, 914), you could begin by typing @c 914 001. If it connects, you make a note of it and go on to 914 002. You do this until you've found some interesting systems to play with.
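The NUA format diagrammed earlier (DNIC, area prefix, network address) is fixed-width, so it can be split apart mechanically. This sketch hard-codes the field widths from the 031103130002520 example; the function name is mine, not part of any real tool.

```python
# Split a 15-digit Network User Address into its three fixed-width
# fields: 5-digit DNIC, 3-digit area prefix, 7-digit network address.

def parse_nua(nua):
    assert len(nua) == 15 and nua.isdigit(), "expected 15 digits"
    return {
        "dnic": nua[:5],           # e.g. 03110 = Telenet, USA
        "area_prefix": nua[5:8],   # often mirrors the phone area code
        "address": nua[8:],        # the individual host on that network
    }

fields = parse_nua("031103130002520")
```

Splitting the address this way makes it obvious why, on Telenet itself, the 03110 DNIC can be omitted: only the last two fields actually select the host.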
Not all systems are on a simple xxx yyy address. Some go out to four or five digits (914 2354), and some have decimal or numeric extensions (422 121A = 422 121.01). You have to play with them; you never know what you're going to find. To fully scan out a prefix would take ten million attempts per prefix. For example, if I wanted to scan 512 completely, I'd have to start with 512 00000.00 and go through 512 00000.99, then increment the address by 1 and try 512 00001.00 through 512 00001.99. A lot of scanning. There are plenty of neat computers to play with in a 3-digit scan, however, so don't go berserk with the extensions.

Sometimes you'll attempt to connect and it will just be sitting there after one or two minutes. In this case, you want to abort the connect attempt by sending a hard break (this varies with different term programs; on Procomm, it's ALT-B), and then when you get the @ prompt back, type 'D' for disconnect. If you connect to a computer and wish to disconnect, you can type @ and it should say TELENET and then give you the @ prompt. From there, type D to disconnect or CONT to re-connect and continue your session uninterrupted.

Outdials, Network Servers, and PADs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In addition to computers, an NUA may connect you to several other things. One of the most useful is the outdial. An outdial is nothing more than a modem you can get to over Telenet, similar to the PC Pursuit concept, except that these don't have passwords on them most of the time. When you connect, you will get a message like 'Hayes 1200 baud outdial, Detroit, MI', or 'VEN-TEL 212 Modem', or possibly 'Session 1234 established on Modem 5588'. The best way to figure out the commands on these is to type ? or H or HELP; this will get you all the information that you need to use one.

A safety tip here: when you are hacking *any* system through a phone dialup, always use an outdial or a diverter, especially if it is a local phone number to you.
More people get popped hacking on local computers than you can imagine; Intra-LATA calls are the easiest things in the world to trace inexpensively.

Another nice trick you can do with an outdial is use the redial or macro function that many of them have. The first thing you do when you connect is invoke the 'Redial Last Number' facility. This will dial the last number used, which will be the one the person using it before you typed. Write down the number, as no one would be calling a number without a computer on it. This is a good way to find new systems to hack. Also, on a VEN-TEL modem, type 'D' for Display and it will display the five numbers stored as macros in the modem's memory.

There are also different types of servers for remote Local Area Networks (LANs) that have many machines all over the office or the nation connected to them. I'll discuss identifying these later in the computer ID section.

And finally, you may connect to something that says 'X.25 Communication PAD' and then some more stuff, followed by a new @ prompt. This is a PAD just like the one you are on, except that all attempted connections are billed to the PAD, allowing you to connect to those nodes that earlier refused collect connections. This also has the added bonus of confusing where you are connecting from. When a packet is transmitted from PAD to PAD, it contains a header that has the location you're calling from. For instance, when you first connected to Telenet, it might have said 212 44A CONNECTED if you called from the 212 area code. This means you were calling PAD number 44A in the 212 area. That 212 44A will be sent out in the header of all packets leaving the PAD. Once you connect to a private PAD, however, all the packets going out from *it* will have its address on them, not yours. This can be a valuable buffer between yourself and detection.
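As a back-of-the-envelope check on the "ten million attempts per prefix" figure quoted earlier for a full scan (5-digit base addresses, each with decimal extensions .00 through .99), a quick sketch:

```python
from itertools import islice

def full_scan(prefix: str):
    """Yield every address in a full scan of one prefix, as described
    earlier: 5-digit bases 00000-99999, each with extensions .00-.99."""
    for base in range(100_000):
        for ext in range(100):
            yield f"{prefix} {base:05d}.{ext:02d}"

# 100,000 bases x 100 extensions = 10,000,000 attempts per prefix,
# which matches the figure quoted in the text.
print(100_000 * 100)
print(list(islice(full_scan("512"), 3)))
# ['512 00000.00', '512 00000.01', '512 00000.02']
```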
Phone Scanning
~~~~~~~~~~~~~~

Finally, there's the time-honored method of computer hunting that was made famous among the non-hacker crowd by that Oh-So-Technically-Accurate movie WarGames. You pick a three-digit phone prefix in your area and dial every number from 0000 --> 9999 in that prefix, making a note of all the carriers you find. There is software available to do this for nearly every computer in the world, so you don't have to do it by hand.

Part Three: I've Found a Computer, Now What?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This next section is applicable universally. It doesn't matter how you found this computer; it could be through a network, or it could be from carrier-scanning your high school's phone prefix. You've got this prompt... what the hell is it? I'm *NOT* going to attempt to tell you what to do once you're inside any of these operating systems. Each one is worth several G-files in its own right. I'm going to tell you how to identify and recognize certain OpSystems, how to approach hacking into them, and how to deal with something that you've never seen before and have no idea what it is.

VMS- The VAX computer is made by Digital Equipment Corporation (DEC), and runs the VMS (Virtual Memory System) operating system. VMS is characterized by the 'Username:' prompt. It will not tell you if you've entered a valid username or not, and will disconnect you after three bad login attempts. It also keeps track of all failed login attempts and informs the owner of the account, next time s/he logs in, how many bad login attempts were made on the account. It is one of the most secure operating systems around from the outside, but once you're in there are many things that you can do to circumvent system security. The VAX also has the best set of help files in the world. Just type HELP and read to your heart's content.
Common Accounts/Defaults:  [username: password [[,password]]]
SYSTEM:    OPERATOR or MANAGER or SYSTEM or SYSLIB
OPERATOR:  OPERATOR
SYSTEST:   UETP
SYSMAINT:  SYSMAINT or SERVICE or DIGITAL
FIELD:     FIELD or SERVICE
GUEST:     GUEST or unpassworded
DEMO:      DEMO or unpassworded
DECNET:    DECNET

DEC-10- An earlier line of DEC computer equipment, running the TOPS-10 operating system. These machines are recognized by their '.' prompt. The DEC-10/20 series are remarkably hacker-friendly, allowing you to enter several important commands without ever logging into the system. Accounts are in the format [xxx,yyy] where xxx and yyy are integers. You can get a listing of the accounts and the process names of everyone on the system before logging in with the command .systat (for SYstem STATus). If you see an account that reads [234,1001] BOB JONES, it might be wise to try BOB or JONES or both for a password on this account. To log in, you type .login xxx,yyy and then type the password when prompted for it. The system will allow you unlimited tries at an account, and does not keep records of bad login attempts. It will also inform you if the UIC you're trying (UIC = User Identification Code, 1,2 for example) is bad.

Common Accounts/Defaults:
1,2:   SYSLIB or OPERATOR or MANAGER
2,7:   MAINTAIN
5,30:  GAMES

UNIX- There are dozens of different machines out there that run UNIX. While some might argue it isn't the best operating system in the world, it is certainly the most widely used. A UNIX system will usually have a prompt like 'login:' in lower case. UNIX will also give you unlimited shots at logging in (in most cases), and there is usually no log kept of bad attempts.

Common Accounts/Defaults: (note that some systems are case-sensitive, so use lower case as a general rule. Also, many times the accounts will be unpassworded, and you'll just drop right in!)
root:     root
admin:    admin
sysadmin: sysadmin or admin
unix:     unix
uucp:     uucp
rje:      rje
guest:    guest
demo:     demo
daemon:   daemon
sysbin:   sysbin

Prime- Prime computer company's mainframe running the Primos operating system. They are easy to spot, as they greet you with 'Primecon 18.23.05' or the like, depending on the version of the operating system you run into. There will usually be no prompt offered; it will just look like it's sitting there. At this point, type 'login <username>'. If it is a pre-18.00.00 version of Primos, you can hit a bunch of ^C's for the password and you'll drop in. Unfortunately, most people are running versions 19+. Primos also comes with a good set of help files. One of the most useful features of a Prime on Telenet is a facility called NETLINK. Once you're inside, type NETLINK and follow the help files. This allows you to connect to NUA's all over the world using the 'nc' command. For example, to connect to NUA 026245890040004, you would type @nc :26245890040004 at the netlink prompt.

Common Accounts/Defaults:
PRIME      PRIME or PRIMOS
PRIMOS_CS  PRIME or PRIMOS
PRIMENET   PRIMENET
SYSTEM     SYSTEM or PRIME
NETLINK    NETLINK
TEST       TEST
GUEST      GUEST
GUEST1     GUEST

HP-x000- This system is made by Hewlett-Packard. It is characterized by the ':' prompt. The HP has one of the more complicated login sequences around: you type 'HELLO SESSION NAME,USERNAME,ACCOUNTNAME,GROUP'. Fortunately, some of these fields can be left blank in many cases. Since any and all of these fields can be passworded, this is not the easiest system to get into, except for the fact that there are usually some unpassworded accounts around. In general, if the defaults don't work, you'll have to brute-force it using the common password list (see below). The HP-x000 runs the MPE operating system; the prompt for it will be a ':', just like the logon prompt.
Common Accounts/Defaults:
MGR.TELESUP,PUB                      User: MGR  Acct: HPONLY  Grp: PUB
MGR.HPOFFICE,PUB                     unpassworded
MANAGER.ITF3000,PUB                  unpassworded
FIELD.SUPPORT,PUB                    user: FLD, others unpassworded
MAIL.TELESUP,PUB                     user: MAIL, others unpassworded
MGR.RJE                              unpassworded
FIELD.HPPl89,HPPl87,HPPl89,HPPl96    unpassworded
MGR.TELESUP,PUB,HPONLY,HP3           unpassworded

IRIS- IRIS stands for Interactive Real Time Information System. It originally ran on PDP-11's, but now runs on many other minis. You can spot an IRIS by the 'Welcome to "IRIS" R9.1.4 Timesharing' banner, and the ACCOUNT ID? prompt. IRIS allows unlimited tries at hacking in, and keeps no logs of bad attempts. I don't know any default passwords, so just try the common ones from the password database below.

Common Accounts:
MANAGER
BOSS
SOFTWARE
DEMO
PDP8
PDP11
ACCOUNTING

VM/CMS- The VM/CMS operating system runs on International Business Machines (IBM) mainframes. When you connect to one of these, you will get a message similar to 'VM/370 ONLINE', and then a '.' prompt, just like TOPS-10 gives. To log in, you type 'LOGON <username>'.

Common Accounts/Defaults are:
AUTOLOG1: AUTOLOG or AUTOLOG1
CMS:      CMS
CMSBATCH: CMS or CMSBATCH
EREP:     EREP
MAINT:    MAINT or MAINTAIN
OPERATNS: OPERATNS or OPERATOR
OPERATOR: OPERATOR
RSCS:     RSCS
SMART:    SMART
SNA:      SNA
VMTEST:   VMTEST
VMUTIL:   VMUTIL
VTAM:     VTAM

NOS- NOS stands for Networking Operating System, and runs on the Cyber computer made by Control Data Corporation. NOS identifies itself quite readily, with a banner of 'WELCOME TO THE NOS SOFTWARE SYSTEM. COPYRIGHT CONTROL DATA 1978,1987'. The first prompt you will get will be FAMILY:. Just hit return here. Then you'll get a USER NAME: prompt. Usernames are typically 7 alphanumeric characters long, and are *extremely* site-dependent. Operator accounts begin with a digit, such as 7ETPDOC.
Common Accounts/Defaults:
$SYSTEM  unknown
SYSTEMV  unknown

Decserver- This is not truly a computer system, but a network server that has many different machines available from it. A Decserver will say 'Enter Username>' when you first connect. This can be anything; it doesn't matter, it's just an identifier. Type 'c', as this is the least conspicuous thing to enter. It will then present you with a 'Local>' prompt. From here, you type 'c <system name>' to connect to a system. To get a list of system names, type 'sh services' or 'sh nodes'. If you have any problems, online help is available with the 'help' command. Be sure to look for services named 'MODEM' or 'DIAL' or something similar; these are often outdial modems and can be useful!

GS/1- Another type of network server. Unlike a Decserver, you can't predict what prompt a GS/1 gateway is going to give you. The default prompt is 'GS/1>', but this is redefinable by the system administrator. To test for a GS/1, do a 'sh d'. If that prints out a large list of defaults (terminal speed, prompt, parity, etc...), you are on a GS/1. You connect in the same manner as a Decserver, typing 'c <system name>'. To find out what systems are available, do a 'sh n' or a 'sh c'. Another trick is to do a 'sh m', which will sometimes show you a list of macros for logging onto a system. If there is a macro named VAX, for instance, type 'do VAX'.

The above are the main system types in use today. There are hundreds of minor variants on the above, but this should be enough to get you started.

Unresponsive Systems
~~~~~~~~~~~~~~~~~~~~

Occasionally you will connect to a system that will do nothing but sit there. This is a frustrating feeling, but a methodical approach to the system will yield a response if you take your time. The following list will usually make *something* happen.

1) Change your parity, data length, and stop bits. A system that won't respond at 8N1 may react at 7E1 or 8E2 or 7S2.
   If you don't have a term program that will let you set parity to EVEN, ODD, SPACE, MARK, and NONE, with data length of 7 or 8, and 1 or 2 stop bits, go out and buy one. While having a good term program isn't absolutely necessary, it sure is helpful.
2) Change baud rates. Again, if your term program will let you choose odd baud rates such as 600 or 1100, you will occasionally be able to penetrate some very interesting systems, as most systems that depend on a strange baud rate seem to think that this is all the security they need...
3) Send a series of <CR>'s.
4) Send a hard break followed by a <CR>.
5) Type a series of .'s (periods). The Canadian network Datapac responds to this.
6) If you're getting garbage, hit an 'i'. Tymnet responds to this, as does a MultiLink II.
7) Begin sending control characters, starting with ^A --> ^Z.
8) Change terminal emulations. What your vt100 emulation thinks is garbage may all of a sudden become crystal clear using ADM-5 emulation. This also relates to how good your term program is.
9) Type LOGIN, HELLO, LOG, ATTACH, CONNECT, START, RUN, BEGIN, LOGON, GO, JOIN, HELP, and anything else you can think of.

f:\12000 essays\technology & computers (295)\X Hacking56.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

NEW CORDLESS TELEPHONE FREQUENCY LISTINGS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CHANNEL    BASE (MHz)    PORTABLE (MHz)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   1        46.610          49.670
   2        46.630          49.845*
   3        46.670          49.860*
   4        46.710          49.770
   5        46.730          49.875*
   6        46.770          49.830*
   7        46.830          49.890*
   8        46.870          49.930
   9        46.930          49.990
  10        46.970          49.970

Some of the older cordless phones using the frequencies marked with the asterisk (*) are paired with frequencies around 1.7 MHz. Listening to the 1.7 MHz side will yield both sides of the conversation. The best frequencies to monitor are the 46 MHz (base) frequencies, as they will carry both sides of the conversation.
Power output of both base and hand units is less than 100 mW, or 1/10 watt, so the range is limited. Careful monitoring will produce some outstanding results. It is not uncommon to hear conversations up to a mile away.

f:\12000 essays\technology & computers (295)\X Hacking59.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Jenna Gray

Another one got caught today, it's all over the papers. "Teenager Arrested in Computer Crime Scandal", "Hacker Arrested after Bank Tampering"... Damn kids. They're all alike. But did you, in your three-piece psychology and 1950's technobrain, ever take a look behind the eyes of the hacker? Did you ever wonder what made him tick, what forces shaped him, what may have molded him? I am a hacker, enter my world... Mine is a world that begins with school... I'm in junior high or high school. I've listened to teachers explain for the fifteenth time how to reduce a fraction. I understand it. "No, Ms. Smith, I didn't show my work. I did it in my head..." Damn kid. Probably copied it. They're all alike. I made a discovery today. I found a computer. Wait a second, this is cool. It does what I want it to. If it makes a mistake, it's because I screwed it up. Not because it doesn't like me... Or feels threatened by me... Or thinks I'm a smart ass... Or doesn't like teaching and shouldn't be here... Damn kid. All he does is play games. They're all alike. And then it happened... a door opened to a world... rushing through the phone line like heroin through an addict's veins, an electronic pulse is sent out, a refuge from the day-to-day incompetencies is sought... a board is found. "This is it... this is where I belong..." I know everyone here... even if I've never met them, never talked to them, may never hear from them again... I know you all... Damn kid. Tying up the phone line again. They're all alike... You bet your ass we're all alike... We've been spoon-fed baby food at school when we hungered for steak...
The bits of meat that you did let slip through were pre-chewed and tasteless. We've been dominated by sadists, or ignored by the apathetic. The few that had something to teach found us willing pupils, but those few are like drops of water in the desert. This is our world now... the world of the electron and the switch, the beauty of the baud. We make use of a service already existing without paying for what could be dirt-cheap if it wasn't run by profiteering gluttons, and you call us criminals. We explore... and you call us criminals. We seek after knowledge... and you call us criminals. We exist without skin color, without nationality, without religious bias... and you call us criminals? Yes, I am a criminal. My crime is that of curiosity. My crime is that of judging people by what they say and think, not what they look like. My crime is that of outsmarting you, something that you will never forgive me for. I am a hacker, and this is my manifesto. You may stop this individual, but you can't stop us all... after all, we're all alike.

Hacking is a serious offense. I think that I agree with Jansie Kotze's theories; she explains a lot of the things that I was wondering about. I think that she had a lot to say, and she said it very well. On the other hand, the boy who is supposedly in junior high has serious problems. After reading the page he had written, I saw that I was visitor #27,461 to his homepage. I was shocked... and I wondered how many little kids under 12 years old had been in there and gotten all sorts of ideas... bad ideas. He had everything from viruses to download to other tips. He even had a "cookbook" that he called the infection connection. I never thought about hacking and phreaking all that much until now. Sure, kids give each other little viruses for kicks, but when you can break passwords and breach security, that is getting out of hand. I think that it is all just a wanting to know.
He even wrote in one paragraph that we could easily find out his phone # and track him down. He even put a smiley face after the sentence... like it was a joke to be stalked. I think there are a lot of sick people in this world that aren't computer nerds. They are maniacs that have fun torturing other people, and proving they can break into any file, do anything, and nothing will stop them. There was so much information that it was mind-boggling. I didn't want to look at it, because it really is intriguing... interesting. But I finally concluded that I was actually scared. In those chatrooms, people can send viruses just like that. They can mess up your whole Netscape if you open the file that they send you. Even on this particular guy's homepage, he could have planted a virus so that when you went into a particular area, a virus would download. I was cautious of where I went, but I think that they wouldn't do that. That would give them away too easily. I think that the reason why they put up a homepage is to prove their knowledge is great... and they are actually competing with one another to be the best... and not get caught. This guy was a kid himself... and he was mocking us all. I don't see why he needed to brag, but I guess every kid wants to be noticed. Well, in my mind, he succeeded. He is living in the future of technology... and is way more advanced than even I. I envy him in some ways, but then I don't. Just one slip and they catch him, bam, he's in jail. I think I'll live on the safe track and use the technology wisely and respect others that also have the same technology and knowledge as I.

f:\12000 essays\technology & computers (295)\X Internet39.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Internet

Introduction
- What is the internet?
- Why should you have it?
- I have it

Body
- Who sells the internet?
- What equipment is needed?
- Legal/illegal implications
- Accessibility: which browser?
- Material
- Protection
- Upload/download

Conclusion
- Advantages
- What can it do for you?
- New world

Internet

The internet is a service that is available on computer to subscribers. The internet opens up a whole new world of communication, information and entertainment. Having access to the internet has given me a chance to explore a totally new dimension in technology. I believe that anybody who has access to the internet will benefit greatly from this experience.

The internet service is marketed by various companies, which are increasing in number on a daily basis. These companies basically offer the same package but differ in the amounts they charge for membership. In my opinion, a good company to subscribe to is one that offers a flat rate.

In order to access the internet you require a good computer and a powerful modem. If you have these, it is much easier and faster to "Surf the Net". I would recommend a 28.8 kbps modem manufactured by U.S. Robotics.

The internet can give you access to both legal & illegal sites on the net. There is pirated software, e.g. full versions of games, that you can access without actually paying for them.

The internet can only be accessed with a browser. There are a few web browsers, but the two main ones are Netscape Navigator and Internet Explorer.

As I mentioned earlier, the internet allows everyone to access various topics of interest on the web. The choices range from recreational, educational, hobbies, communications and entertainment. There is always a risk of accessing material which is not appropriate, e.g. pornographic material and racist material, which are available to anyone who wishes to view them. However, there is a way of protecting children who should not be viewing such material. There is software like Internet Nanny, Adult Lock & Firewall which, once installed, will protect children from viewing it.
One of the disadvantages is that while downloading files from the internet there is a possibility of downloading a virus into your system. In order to prevent this from happening, I suggest you only download from trustworthy and reliable sites. Uploading files is basically giving a file to someone else.

The internet is a very powerful tool to have if used in the right way. From the comfort of your own home you can surf the net and find the power that lies within the web. I would recommend that, if possible, everyone should at some time or other have access to the internet.

BY ALNUR ISAMIL 8-6

f:\12000 essays\technology & computers (295)\X Software Piracy24.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Software Piracy

What is Software Piracy?

The PC industry is just over 20 years old. In those 20 years, both the quality and quantity of available software programs have increased dramatically. Although approximately 70% of the worldwide market is today supplied by developers in the United States, significant development work is occurring in scores of nations around the world. But in both the United States and abroad, unauthorized copying of personal computer software is a serious problem. On average, for every authorized copy of personal computer software in use, at least one unauthorized copy is made. Unauthorized copying is known as software piracy, and in 1994 it cost the software industry in excess of US$15 billion.

Piracy is widely practiced and widely tolerated. In some countries, legal protection for software is nonexistent (e.g., Kuwait); in others, laws are unclear (e.g., Israel) or not enforced with sufficient commitment (e.g., the PRC). Significant piracy losses are suffered in virtually every region of the world. In some areas (e.g., Indonesia), the rate of unauthorized copying is believed to be in excess of 99%.

Why do People Use Pirated Software?

A major reason for the use of pirated software is the price of the REAL thing.
Just walk into a CompUSA, Electronics Boutique, Computer City, Egghead, etc., and you will notice the expensive price tags on copies of the most commonly used programs and the hottest games. Take the recent Midwest Micro holiday catalogue for example and notice the prices: Microsoft Windows 95: $94, Microsoft Office 95: $224, Microsoft Visual C++: $250, Borland C++: $213, Corel Draw 7: $229, Corel Office Professional 7: $190, Lotus Smartsuite 96: $150, Microsoft Flight Simulator 95: $50, Warcraft 2: $30. The list goes on and on, and the prices for the programs listed above were only for upgrade versions. Users of the software listed above range from large companies like AT&T to yourself, the average user at home. Although a $30 game like Warcraft 2 doesn't seem like much, by the time you finish reading this paper, it will seem like a fortune.

Ease of Availability

Since the law states clearly that making a copy of software you own and distributing it, or installing one copy of one piece of software on two separate computers, is illegal, why do average Joes like you and us still do it? There are many answers to that question, and all of them seem legitimate, though none can be legally justified. A friend borrowing another friend's Corel Draw or Windows 95 to install on their own PC is so common that the issue of piracy probably doesn't even come to mind right away, or even at all.

Pirated Software on the Internet

The Internet is sometimes referred to as a "Pirate's Heaven." Pirated software is available all over the net if you bother to look for it. Just go to any of the popular search engines like Excite, Infoseek or Yahoo and type in the common phrases "warez, appz, gamez, hacks" and thousands of search results will come up.
Although many of the links on the pages will be broken, because the people have either moved the page or had the page shut down, some of the links will work, and that one link usually has a decent amount of stuff for you to leech off of, or, a better way to put it, for you to download.

Web Sites That We Have Personally Visited:
Jelle's Warez Collection
Wazh's Warez Page
Beg's Warez Page
Chovy's Empire
The Spawning Grounds
GAMEZ
Lmax's Warez Page
Jugg's Warez-List
Jureweb Warez Page
Top Warez Page

Why Are They There?

Why is there pirated software on the net? There could only be two possible answers: either the people who upload these files are very nice people, or they do it just because it's illegal and browsers of the web like us wouldn't mind taking our time to visit these sites to download the software. What they get out of it is the thousands of "hits" their sites get a day, which makes them very happy.

Anonymous and Account-Based FTP Sites

FTP stands for File Transfer Protocol. FTP sites are around so that people can exchange software with each other, and so that companies like Microsoft can distribute info and demos to users who visit their FTP site. Something they don't want happening is the distribution of their full-release products on "pirate" FTP sites. "Pirate" FTP sites come and go. Most sites don't stay up for more than a day or two. They are also referred to as 0-day FTP sites. It's extremely difficult to log on to these sites because they are usually full of leechers like us or require a username and password.

FTP Sites That We Have Visited:
ftp://ftp.epri.com
ftp://ftp.dcs.gla.ac.uk
ftp://204.177.0.18
ftp://207.48.187.133
ftp://192.88.237.2
ftp://153.104.11.94
ftp://208.137.11.105
ftp://194.85.157.2

Newsgroups

There are over 20,000 newsgroups on the net. The majority of them are nonsense, but if you happen to stumble upon the right one, you'll be able to get almost any crack or serial number for any game or program.
Although programs and games are not abundant on newsgroups, you'll be able to obtain registered versions of such popular shareware as WinZip and mIRC, and if you post trade requests, people will respond to your request.

Newsgroups With Cracks, Serial #'s, Programs and Games:
News:alt.binaries.cracks
News:alt.binaries.games
News:alt.crackers
News:alt.cracks
News:alt.hacker
News:alt.binaries.warez.ibm-pc
News:alt.binaries.warez.ibm-pc.games
News:alt.warez.ibm-pc

Exchanging Through E-Mail

It is illegal to send copyrighted programs and games through e-mail, but does anyone really care? Every day, hundreds and thousands of illegally attached programs and games are sent over the net in the form of e-mail. Just visit any of the above newsgroups and you'll see listings of people who want to trade through e-mail. We placed an ad in news:alt.binaries.cracks requesting three programs: Magnaram 97, Qemm 8.0 and Corel Draw 7. We managed to receive both Magnaram 97 and Qemm 8.0 through e-mail from some nice person, but did not receive Corel Draw 7, most likely because it was not a reasonable request.

Modem Speeds

Part of the reason nobody sent us Corel Draw 7 is the size of the program and the many hours it takes to upload and download it. The two most common modem speeds at the time this report was written are 28.8kbps and 14.4kbps. Both speeds are considered to be extremely slow when it comes to transferring enormous amounts of data. Most programs and games nowadays come on CD-ROMs which, if full, contain 650MB of data. The new X2 technology, cable modems, ISDN modems and DirecPC satellite dishes could ease the long download times, considering that all the above-mentioned connections are two to fourteen times faster at transferring data than 28.8kbps modems.

Cost of Pirated Software To The Industry

Piracy cost companies that produce computer software $13.1 billion in lost revenue during 1995.
The loss exceeded the combined revenues of the 10 largest personal computer software companies. The dollar-loss estimates were up from $12.2 billion in 1994 because of the spreading use of computers worldwide.

Microsoft (The Big Loser)
MS Windows 95      $179
MS Office Pro 95   $535
MS Project 95      $419
MS Publisher 97    $69
MS Visual C++ 4.0  $448

These are the prices they expect people to buy their software at. In Hong Kong, copies of these lucrative pieces of software can very easily be had for about five US dollars, all of them on one CD. That will be further explained later.

The Honest Consumer

Software piracy harms all software companies and, ultimately, the end user. Piracy results in higher prices for honest users, reduced levels of support and delays in funding and development of new products, causing the overall breadth and quality of software to suffer.

US Laws

In 1964, the United States Copyright Office began to register software as a form of literary expression. The Copyright Act, Title 17 of the U.S. Code, was amended in 1980 to explicitly include computer programs. Today, according to the Copyright Act, it is illegal to make or distribute copyrighted material without authorization. The only exceptions are the user's right to make a copy as an "essential step" in using the program (for example, by copying the program into RAM) and to make a single backup copy for archival purposes (Title 17, Section 117). No other copies may be made without specific authorization from the copyright owner.

In December 1990, the U.S. Congress approved the Software Rental Amendments Act, which generally prohibits the rental, leasing or lending of software without the express written permission of the copyright holder. This amendment followed the lead of the British Parliament (which passed a similar law, the Copyright, Designs and Patents Act, in 1988), and adds significant additional protection against unauthorized copying of personal computer software.
In addition, the copyright holder may grant additional rights at the time the personal computer software is acquired. For example, many applications are sold in LAN (local area network) versions that allow a software package to be placed on a LAN for access by multiple users, and permission is sometimes given under a special license agreement to make multiple copies for use throughout a large organization. But unless these rights are specifically granted, U.S. law prohibits a user from making duplicate copies of software except to ensure one working copy and one archival copy. Without authorization from the copyright owner, Title 18 of the U.S. Code prohibits duplicating software for profit, making multiple copies for use by different users within an organization, downloading multiple copies from a network, or giving an unauthorized copy to another individual. All of these acts are illegal and are federal crimes, with penalties including fines of up to $250,000 and jail terms of up to five years (Title 18, Sections 2320 and 2322).

Business Software Alliance (BSA)
The Business Software Alliance (BSA) promotes the continued growth of the software industry through its international public policy, enforcement, and education programs in 65 countries throughout North America, Europe, Asia, and Latin America. Founded in 1988, BSA's mission is to advance free and open world trade for legitimate business software by advocating strong intellectual property protection for software. BSA's worldwide members include the leading publishers of software for personal computers, such as Adobe Systems, Inc., Apple Computer, Inc., Autodesk, Inc., Bentley Systems, Inc., Lotus Development Corp., Microsoft Corp., Novell, Inc., Symantec Corp., and The Santa Cruz Operation, Inc. BSA's Policy Council consists of these publishers and other leading computer technology companies, including Computer Associates International, Inc., Digital Equipment Corp., IBM Corp., Intel Corp., and Sybase, Inc.
Statistics of Software Piracy
Court Cases
Inslaw vs. Dept. of Justice
- Sued the Justice Dept. for software piracy.
- In 1982, Inslaw landed a $10M contract with the Justice Dept. to install PROMIS case-tracking software in 20 offices.
- They allegedly spent $8M enhancing PROMIS on the assumption that they could renegotiate the contract to recoup the expenses.
- But after the Justice Dept. got the source code, it terminated the contract and pirated the code.
- By 1985, Inslaw was forced into bankruptcy.
- The owners kept fighting, and the case ended up in the US Bankruptcy Court.
- In Feb. '88, Inslaw was awarded $6.8M in damages plus legal fees.

Novell and Microsoft Settle Largest BBS Piracy Case Ever
- Scott W. Morris, operator of the Assassin's Guild BBS, agreed to pay Microsoft and Novell $73,00 in cash and forfeit computer hardware valued at more than $40,000.
- In the raid, marshals seized 13 computers, 11 modems, a satellite dish, 9 gigs of online data, and over 40 gigs of off-line data.

Novell Files Software Piracy Suits Against 17 Companies in California
- The suits allege that the defendants were fraudulently obtaining Novell upgrades and/or counterfeiting NetWare boxes to give the appearance of a new product.
- The suits follow Novell's discovery that the upgrade product was being sold in Indonesia, the United Kingdom, and the United Arab Emirates, as well as the US.

F.B.I. Reveals Arrest in Major CD-ROM Piracy Case
- The first major case of CD-ROM piracy in the United States.
- A Canadian father and son were found in possession of 15,000 counterfeit copies of Rebel Assault and Myst that were being sold at 25% of the retail value.
- Both men were free on bail.

Pirated Software in Asia and the Rest of the World
Pirate Plants in China
The Chinese government says there are 34 factories in China producing compact discs and laser discs. Authorities say most have legitimate licenses to produce legal CDs. But production capacity far outstrips domestic demand.
According to the International Intellectual Property Alliance, a Washington, D.C.-based consortium of film, music, computer software and publishing businesses, China produces an estimated 100 million pirated CDs a year, while its domestic market is only 5 million to 7 million CDs annually. Where is the oversupply going? To Hong Kong, and then overseas. Another major problem is that Chinese officials and soldiers have money invested in these factories, so no matter how hard the US pushes China to close them down, the Chinese government will take a laid-back approach. Software piracy in Asia is also connected to organized crime.

Vendors in Hong Kong
The Golden Shopping Arcade in Hong Kong's Sham Shui Po district is a software pirate's dream and a software company's nightmare. Here you can buy CDs called "installer discs" for about nine US dollars. Each volume of these installers contains 50+ programs, each compressed with a self-extracting utility. Volume 2 has a beta copy of Windows 95 as well as OS/2 Warp, CorelDraw! 5, Quicken 4.0, Atari Action Pack for Windows, Norton Commander, KeyCad, Adobe Premiere, Microsoft Office, and dozens of other applications, including a handful written in Chinese. The programs on this disc cost around $20,000-$35,000 US retail. It is very common for a store to close for a portion of the day and then reopen later because of raids by the authorities. These stores, as you would expect, are extremely crowded with kids and tourists.

US Tourists
A good number of Americans who travel to Hong Kong or other parts of Asia bring home pirated software of some sort, because software that carries a very high price in the US sells there for very little. The usual way to do it is to stuff the CDs into clothes and hand-carried luggage; another approach is to mail them back to the US through the postal service. Both of these methods work very well. We have relatives who have done this for us, and the success rate thus far is 100%.
The United States Customs Service has been trained in the apprehension of software pirates at ports of entry, but this is a joke because they are more worried about illegal immigrants and terrorists than about software pirates.

Software Piracy
What is Software Piracy?
The PC industry is just over 20 years old. In those 20 years, both the quality and quantity of available software programs have increased dramatically. Although approximately 70% of the worldwide market is today supplied by developers in the United States, significant development work is occurring in scores of nations around the world. But in both the United States and abroad, unauthorized copying of personal computer software is a serious problem. On average, for every authorized copy of personal computer software in use, at least one unauthorized copy is made. Unauthorized copying is known as software piracy, and in 1994 it cost the software industry in excess of US$15 billion. Piracy is widely practiced and widely tolerated. In some countries, legal protection for software is nonexistent (e.g., Kuwait); in others, laws are unclear (e.g., Israel) or not enforced with sufficient commitment (e.g., the PRC). Significant piracy losses are suffered in virtually every region of the world, and in some areas (e.g., Indonesia) the rate of unauthorized copying is believed to be in excess of 99%.

Why do People Use Pirated Software?
A major reason for the use of pirated software is the price of the REAL thing. Just walk into a CompUSA, Electronics Boutique, Computer City, Egghead, etc., and you will notice the expensive price tags on copies of the most commonly used programs and the hottest games. Take the recent Midwest Micro holiday catalogue, for example, and notice the prices:
Microsoft Windows 95: $94
Microsoft Office 95: $224
Microsoft Visual C++: $250
Borland C++: $213
Corel Draw 7: $229
Corel Office Professional 7: $190
Lotus Smartsuite 96: $150
Microsoft Flight Simulator 95: $50
Warcraft 2: $30
The list goes on and on, and the prices listed above are only for upgrade versions. Users of this software range from large companies like AT&T to yourself, the average user at home. A $30 game like Warcraft 2 doesn't seem like much, but by the time you finish reading this paper, it will seem like a fortune.

Ease of Availability
The law clearly states that copying software you own and distributing it, or installing one copy of a piece of software on two separate computers, is illegal. So why do average Joes like you and us still do it? There are many answers to that question, and all of them seem legitimate, except that none of them can be legally justified. A friend borrowing another friend's CorelDraw or Windows 95 to install on their own PC is so common that the issue of piracy probably doesn't come to mind right away, or even at all.

Pirated Software on the Internet
The Internet is sometimes referred to as a "Pirate's Heaven." Pirated software is available all over the net if you bother to look for it. Just go to any of the popular search engines like Excite, Infoseek or Yahoo and type in a common phrase such as "warez, appz, gamez, hacks," and thousands of search results will come up. Many of the links on those pages will be broken, because the people behind them have either moved the page or had it shut down, but some of the links will work, and a working link usually has a decent amount of stuff for you to leech off of, or, to put it a better way, for you to download.

Web Sites That We Have Personally Visited:
Jelle's Warez Collection
Wazh's Warez Page
Beg's Warez Page
Chovy's Empire
The Spawning Grounds GAMEZ
Lmax's Warez Page
Jugg's Warez-List
Jureweb Warez Page
Top Warez Page

Why Are They There?
Why is there pirated software on the net? There can only be two possible answers.
Either the people who upload these files are very nice people, or they do it just because it's illegal, and browsers of the web like us don't mind taking the time to visit these sites and download the software. What they get out of it is the thousands of "hits" their sites receive each day, which makes them very happy.

Anonymous and Account-Based FTP Sites
FTP stands for File Transfer Protocol. FTP sites exist so that people can exchange software with each other, and so that companies like Microsoft can distribute information and demos to users who visit their FTP sites. Something they don't want happening is the distribution of their full-release products on "pirate" FTP sites. "Pirate" FTP sites come and go; most don't stay up for more than a day or two, which is why they are also referred to as 0-day FTP sites. It is extremely difficult to log on to these sites because they are usually full of leechers like us, or they require a username and password.

FTP Sites That We Have Visited:
ftp://ftp.epri.com
ftp://ftp.dcs.gla.ac.uk
ftp://204.177.0.18
ftp://207.48.187.133
ftp://192.88.237.2
ftp://153.104.11.94
ftp://208.137.11.105
ftp://194.85.157.2

Newsgroups
There are over 20,000 newsgroups on the net. The majority of them are nonsense, but if you happen to stumble upon the right one, you'll be able to get almost any crack or serial number for any game or program.

f:\12000 essays\technology & computers (295)\X Software Piracy45.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Software piracy is the failure of a licensed user to adhere to the conditions of a software license, or the unauthorized use or reproduction of copyrighted software by a person or entity that has not been licensed to use it. Software piracy has become a household word and a household crime, and it has had a great effect on the software industry. It is a problem that can only be solved by the choices of each individual. The computer software industry is one of the great business success stories of recent history, with healthy increases in both hardware and software sales around the world. However, software piracy threatens the industry's economic future. According to estimates by the U.S.
Software Publishers Association, as much as $7.5 billion of American software may be illegally copied and distributed annually worldwide. These copies work as well as the originals and sell for significantly less money. Piracy is relatively easy, and only the largest rings of distributors are usually caught. In addition, software pirates know that they are unlikely to serve hard jail time when prisons are overcrowded with people convicted of more serious crimes. The software industry loses more than $15.2 billion annually worldwide due to software piracy. Software piracy costs the industry:
$482 every second
$28,900 every minute
$1.7 million every hour
$41.6 million every day
$291.5 million every week

To understand software piracy, one must get inside the mind of the pirate. People who wouldn't think of sneaking merchandise out of a store or robbing a house regularly obtain copies of computer programs they haven't paid for. The pirate has a set of excuses for his actions: prices are too high; the company doesn't provide decent support; he's only going to use the program once in a while. What really makes software piracy seem less bad than other kinds of theft, though, is that nothing is physically taken. There is no immediate effect on the inventory or productive capacity of the creator of a piece of software if someone 500 miles away copies a disk and starts using it. People tend to think of property as a material thing, and thus have a hard time regarding a computer program as property. However, property is not a concept pertaining to matter alone. Ownership is a concept which comes out of the fact that people live by creating things of value for their own use or for trade with others. Creation does not mean making matter, but rather changing the form of matter in accordance with an idea and a purpose. Most often, the actual cost of creating goods is incurred in the production of individual items. With software, the reverse is true.
The cost of producing copies is negligible compared with the cost of constructing the form of the product. In both cases, though, the only way a producer can benefit from offering his product in trade is for others to respect his right to it and to obtain it only on his terms. If people are going to make the production of software a full-time occupation, they should expect a return for their efforts; if they receive no benefit, they will have to switch to a different sort of activity if they want to keep working. The thief, though, will seldom be caught and punished; his particular act of copying isn't likely to push a software publisher over the edge. In most cases, people can openly talk about their acts of piracy without suffering criticism. However, there is a more basic deterrent to theft than the risk of getting caught. A person can fake what he is to others, but not to himself. He knows that he is depending on other people's ignorance or willingness to pretend they haven't noticed. He may not feel guilty because of this, but he will always feel helpless and out of control. If he attempts to rationalize his actions, he becomes dependent on his own self-ignorance as well. Thieves who abandon honesty often fall back on the idea of being smart. They think it's stupid to buy something when they can just take it. They know that their own cleverness works only because of the stupidity of others who pay for what they buy. The thieves are counting on the failure of the very people whose successful efforts they use. The best defense against software piracy lies neither in physical barriers to copying nor in stiffer penalties. The main deterrent to theft in stores is not the presence of guards and magnetic detectors, but the fact that most people have no desire to steal. The best way to stop piracy is to instill a similar frame of mind among software users.
This means breaking down the web of excuses by which pirates justify their actions, and leaving them to recognize what they are. Ultimately, this is the most important defense against any violation of people's rights; without an honest majority, no amount of effort by the police will be effective. In almost all countries of the world, there are statutes, criminal and civil, which provide for the enforcement of copyright in software programs. The criminal penalties range from fines to jail terms or both, and civil penalties may reach as high as $100,000 per infringement. In many countries, companies as well as individuals may face civil and criminal sanctions.

There are several different types of software piracy. Networking is a major source of software piracy. Most software licenses are written so that the program can be installed on only one machine and used on only one machine at a time; with some network setups, however, the program can be loaded on several machines at once, in violation of the agreement. On some networks, the speed of transporting the software back and forth is so slow that copying the program onto each machine is much faster, and this too can violate the license agreement.

End-user copying is a form of piracy in which individuals within organisations copy software programs from co-workers, friends and relatives. This is the most prevalent form of software theft. Some refer to end-user copying as "disk swapping."

Hard disk loading happens when unlicensed software is loaded onto computers before you buy them. Generally you, as the customer, will have an original program on your hard drive that you may or may not have paid for; however, you will not receive the accompanying disks or documentation, and you will therefore not be entitled to technical support or upgrades. This practice is often used as a sales feature or an added incentive by the dealer to entice the sale.
Software rental is a form of piracy that takes place when an individual rents a computer with software loaded on it, or rents the software itself, from a rental shop or computer retailer. The licence agreement clearly states that the purchaser is prohibited from engaging in the rental of the software. This often takes the form of a "rental" followed by a re-stocking charge when the software is returned to the retailer.

Counterfeit software involves both low-quality disks and high-quality fakes that are extremely close in appearance to the original software.

Stealing via bulletin boards is one of the fastest-growing means of software theft. It involves downloading programs onto computers via a modem.

OEM unbundling can occur at either the Original Equipment Manufacturer (OEM) level or at the retailer. Unbundling involves separating OEM software from the hardware with which it is licensed to be sold. The product is clearly marked 'For Distribution With New PC Hardware Only' and is designed so that it cannot be sold on the retail shelf. The customer can run into support issues, as it is the OEM that is required to provide support for this type of software, and when you buy unbundled software you take a bigger risk of purchasing a counterfeit product.

In conclusion, software piracy has had a major impact on the software industry. Economically, it has cost the industry billions of dollars each year, and there is no sign that this will change in the near future. No amount of penalties or policing will stop the trend of software piracy. Each individual must develop their own moral standards so that they do not add to the problem.
f:\12000 essays\technology & computers (295)\X Telecommuting27.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
As defined in Webster's New World Dictionary, Third Edition, telecommuting is "an electronic mode of doing work outside the office that traditionally has been done in the office, as by computer terminal in the employee's home." Basically, it is working at home using current technology such as computers, modems, and fax machines. Traditionally, people have commuted to work and back by car, bus, train, and subway. Through the innovation of telecommuting, the actual necessity of changing location in order to accomplish this work has been challenged, on the basis of concerns about energy conservation, loss of productivity, and other issues.

One advantage of telecommuting is energy conservation. A tremendous amount of energy is required to produce transportation equipment such as automobiles, buses, trains, and subways. If telecommuting is promoted, there will be less use of this equipment, and less energy will be required for its production, maintenance, and repair. Fuel resources needed to operate this equipment will also be reduced. The building, repair, and maintenance of highways likewise consume a great deal of energy, not only in the operation of construction and repair equipment but also in the manufacture and transportation of the required materials. An increase in the percentage of people telecommuting to work will decrease the need for expanded highways and associated road maintenance. The first two areas relate to getting to work; once a person arrives at a central office, he or she represents another energy consumer, often magnified many times over what would be required at home. The office building has heating, cooling, and lighting needs, and the materials to build and maintain it require energy in their production and transportation.
Working from home requires only modest incremental demands on energy for heating, cooling, and lighting, and makes effective use of existing building space and facilities.

Telecommuting also improves productivity. Much time is spent on unnecessary activities by people who commute back and forth to work in the conventional manner; time is wasted from the minute one gets up to go to work until the minute one returns home. With telecommuting, one no longer needs to be constantly preparing for the commute and for being "presentable." One can go to work simply by tossing on a robe and slippers, grabbing a cup of coffee, and sitting down at the terminal. You would no longer have to worry whether the car will start, whether your clothes are neat, or whether you're perfectly groomed. That may still be important to you, but it no longer has to be. And you are no longer interrupted by the idle chatter that inevitably takes place at the central workplace: some of it is useful for your work, but a lot of it is just a waste of time and a perpetual interruption. As quoted in Computerworld, one telecommuter comments, "I was feeling really cramped in our old office. I find I can get much more done. It is much more quiet here at home."

In addition, telecommuting reduces family-related stress by allowing involvement with family and flexibility in the location of a remote worksite. Working in the home offers people a greater opportunity to share quality time with family members, to promote family values, and to develop stronger family ties and unity. Time saved through telecommuting could also be spent constructively with family members in ways that promote and foster the resolution of family problems. Since the actual location a telecommuter works from isn't relevant, the person could even move to another town.
This would alleviate the stress caused when a spouse has an opportunity to pursue his or her career in another town and must choose between a new opportunity and no opportunity, because the other spouse does not want to or cannot change employment. If either person could telecommute, the decision would be much easier.

Telecommuting also promotes safety by reducing highway use by people rushing to get to work. There are thousands of traffic-related deaths every year, and thousands more people are severely injured trying to get to work. In addition, there is substantial property loss associated with traffic accidents that occur as people take chances in order to make the mad dash from home to the office. Often, people have made the trip so many times that they are not really alert, sometimes falling asleep at the wheel, and they become frustrated by the insistence that they come into the office every day when, in fact, most if not all of their work could be accomplished from home or from sites much closer to home.

Telecommuting, however, does have its disadvantages. The most obvious is the overwhelming cost of starting a telecommuting program. A study by Forrester Research, Inc. reveals "that it costs $30,000 to $45,000 a head to" train prospective telecommuters. After the first year, however, "per-user spending [is] cut to about $4,000"; also, "employees are starting to see telecommuting policies as a benefit, and companies offering it will be more competitive." Another disadvantage is the psychological impact it may have on employees. "Executives who have labored for years to win such corporate status symbols as secretaries and luxurious corner offices are reluctant to shed their hard-won perks." Some employees also complain that their "creativity... has been dampened" by lack of interaction with their co-workers.
Despite the disadvantages, telecommuting is a viable part of any future plan to preserve and protect our environment from the encroachment and pollution caused by auto emissions, enlarged highways, and ever-growing parking areas. A telecommuting program can be put in place by following a few tips from Mindy Blodgett in her article "Lower costs spur move to more telecommuting": "Form a telecommuting team that includes technical experts, upper managers and human resources staff, and assign a telework coordinator." "Contact other companies to learn from their experiences." "Train participants and supervisors." "Monitor the program through surveys before and after a pilot." Measuring productivity in actual dollars is difficult; productivity is best measured by the satisfaction and enjoyment of employees.

Bibliography

Bjerklie, David, and Patrick E. Cole. "Age of the road warrior." Time 145.12 (1995): 38-40.

Blodgett, Mindy. "Lower costs spur move to more telecommuting." Computerworld 30.45 (1996): 8.

Blodgett, Mindy. "Telecommuting pilot test proves space-saving plan." Computerworld 30.46 (1996): 81-82.

Webster's New World Dictionary of American English, Third College Edition. Victoria Neufeldt, ed. New York, 1988: 1375.

f:\12000 essays\technology & computers (295)\X The Communications Decency Act48.TXT +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The Communications Decency Act

The Communications Decency Act that was signed into law by President Clinton over a year ago is clearly in need of serious revision, not only because of its vagueness, but mostly because the government is infringing on our freedom of speech, be it indecent or not. The Communications Decency Act, also known to Internet users as the CDA, is an act that aims to remove indecent or dangerous text, lewd images, and other material deemed inappropriate from public areas of the Net. The CDA is mainly out to protect children.
In the beginning, the anonymity of the Internet caused it to become a haven for the free trading of pornography. This is largely what gives the Internet a bad name. There is also information on the Net that could be harmful to children: instructions for making homemade explosives and similar material, such as The Jolly Roger and the Anarchist's Cookbook, are easily obtained. Pedophiles (adults sexually attracted to children) also have a place to hide on the Internet, where nobody has to know their real name. As the average age of the Internet user started to drop, it became apparent that something had to be done about the pornography and other inappropriate material on the Net.

On February 1, 1995, Senator Exon, a Democrat from Nebraska, and Senator Gorton, a Republican from Washington, introduced the first bill toward regulating online porn. This was the first incarnation of the Telecommunications Reform Bill. On April 7, 1995, Senator Leahy, a Democrat from Vermont, introduced bill S.714, an alternative to the Exon/Gorton bill that commissioned the Department of Justice to study the problem and determine whether additional legislation (such as the CDA) was even necessary. The Senate passed the CDA, attached to the Telecommunications Reform Bill, on June 14, 1995, by a vote of 84-16. The Leahy bill did not pass, but it was supported by 16 Senators who actually understood what the Internet is. Seven days later, several prominent House members, including Newt Gingrich, Chris Cox, and Ron Wyden, publicly announced their opposition to the CDA. On September 26, 1995, Senator Russ Feingold urged committee members to drop the CDA from the Telecommunications Reform Bill. On Thursday, February 1, 1996, Congress passed (House 414-9, Senate 91-5) the Telecommunications Reform Bill, with the Communications Decency Act attached. This day became known as "Black Thursday" in the Internet community.
One week later, on Thursday, February 8, 1996, it was signed into law by President Clinton - a day also known as the "Day of Protest." Breaking any of the bill's provisions is punishable by up to two years in prison and/or a $250,000 fine. On the "Day of Protest," thousands of home pages went black as Internet citizens expressed their disapproval of the Communications Decency Act.

Presently, numerous organizations have formed in protest of the Act, including the American Civil Liberties Union, the Voters Telecommunications Watch, the Citizens Internet Empowerment Coalition, the Center for Democracy & Technology, the Electronic Privacy Information Center, the Internet Action Group, and the Electronic Frontier Foundation. The ACLU is not involved only with Internet issues; it fights to protect the rights of many different groups (e.g., gay and lesbian rights, death penalty issues, and women's rights). The ACLU is currently involved in the lawsuit Reno v. ACLU, in which it is trying to overturn the CDA.

In addition to Internet users turning their homepage backgrounds black, there was the adoption of the Blue Ribbon, which was also used to symbolize disapproval of the CDA. The Blue Ribbons are similar to the Red Ribbons that AIDS supporters wear. The Blue Ribbon spawned "The Blue Ribbon Campaign," and the Blue Ribbon homepage is the fourth most linked-to site on the Internet; only Netscape, Yahoo, and Webcrawler are linked to more often. (To be "linked to" means that a site can be reached from another site.) It is pretty hard to surf around the Net and not see a Blue Ribbon on someone's site.

On the day President Clinton signed the CDA into law, a group of nineteen organizations, from the American Civil Liberties Union to the National Writers Union, filed suit in federal court, arguing that the Act restricted free speech. At the forefront of the battle against the CDA is Mike Godwin.
Mike Godwin is regarded as one of the most important online-rights activists today. He is staff counsel for the Electronic Frontier Foundation and has "won fans and infuriated rivals with his media savvy, obsessive knowledge of the law, and knack for arguing opponents into exhaustion." Since 1990 he has written on legal issues for magazines like Wired and Internet World and spoken endlessly at universities, at public rallies, and to the national media. Although all of this helped the cause, Godwin didn't become a genuine cyberspace superhero until what he calls the "great Internet sex panic of 1995." During this time, Godwin submitted testimony to the Senate Judiciary Committee, debated Christian Coalition executive director Ralph Reed on Nightline, and led the attack on a flawed study of online pornography. That study became the foundation of Time magazine's controversial July 3 cover story, "On a Screen Near You: Cyberporn." Time said the study proved that pornography was "popular, pervasive, and surprisingly perverse" on the Net, but Godwin put up such a fight that three weeks later the magazine ran a follow-up story admitting the study had serious flaws.

The CDA is a bad solution, but it is a bad solution to a very real problem. As Gina Smith, a writer for Popular Science, has written, "It is absolutely true that the CDA is out of bounds in its scope and wording. As the act is phrased, for example, consenting adults cannot be sure their online conversations won't land them in jail." Even something as newsstand-friendly as the infamous Vanity Fair cover featuring a pregnant and nude (but strategically covered) Demi Moore might be considered indecent under the act, and George Carlin's famous "seven dirty words" are definitely out. CDA supporters are right when they say the Internet and online services are fertile playgrounds for pedophiles and other wackos bent on exploiting children.
Parents could simply watch over their children's shoulders the whole time they are online, but that is both an unfair and an impractical answer. There are two better options: install a software program that blocks certain sites, or discipline children so that they know better than to look at pornography. The latter would appear to be the better alternative, but it just isn't practical; if kids are told not to do something, they only become more curious. On the other hand, many parents are less technologically informed than their kids, and many would not know how to find, install, and use programs such as CyberPatrol or NetNanny.

The future of the CDA seems fairly evident: it does not look like the CDA is going to be successful. In addition to being too far-reaching in its powers, the Act is virtually unenforceable. As with anything in print, much of the material on the Internet is intelligent and worthy of our attention, while some of it is very vulgar. The difficulty in separating the two rests in the fact that much of the Internet's value lies in its freedom from regulation. As Father Robert A. Sirico puts it, "To allow the federal government to censor means granting it the power to determine what information we can and cannot have access to." Temptations to sin will always be with us and around us so long as we live in this world.
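A note on the site-blocking software discussed above: at its core, such a filter simply checks each requested address against a parent-maintained list of off-limits domains before letting the browser fetch it. The sketch below is a hypothetical, minimal illustration of that idea (the domain names are invented, and this is not how CyberPatrol or NetNanny were actually implemented):

```python
from urllib.parse import urlparse

# Hypothetical blocklist a parent might maintain (invented names).
BLOCKED_DOMAINS = {"adult-example.com", "explosives-example.org"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a blocked domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)
```

A real filtering product layers much more on top of this - keyword scanning, category lists updated by the vendor, password-protected settings - but the blocklist check is the basic mechanism.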